<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Slides | Next-generation neural computations</title><link>https://laurentperrinet.github.io/slides/</link><atom:link href="https://laurentperrinet.github.io/slides/index.xml" rel="self" type="application/rss+xml"/><description>Slides</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><copyright>This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder. This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License Please note that multiple distribution, publication or commercial usage of copyrighted papers included in this website would require submission of a permission request addressed to the journal in which the paper appeared.</copyright><lastBuildDate>Thu, 16 Apr 2026 14:00:00 +0000</lastBuildDate><item><title>Working Memory in SNNs</title><link>https://laurentperrinet.github.io/slides/2026-04-16-cerco/</link><pubDate>Thu, 16 Apr 2026 14:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2026-04-16-cerco/</guid><description>&lt;section&gt;
&lt;!-- no-branding --&gt;
&lt;h1 id="learning-working-memory-in-recurrent-spiking-neural-networks-using-heterogeneous-delays"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-04-16-cerco/?transition=fade" target="_blank" rel="noopener"&gt;Learning Working Memory in Recurrent Spiking Neural Networks Using Heterogeneous Delays&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-04-16-cerco/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="cerco-seminar"&gt;&lt;u&gt;&lt;a href="https://cerco.cnrs.fr" target="_blank" rel="noopener"&gt;Cerco seminar&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2026-04-16"&gt;[2026-04-16]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;em&gt;Hello&lt;/em&gt;, I&amp;rsquo;m Laurent Perrinet from the Institut des Neurosciences de la Timone, a joint AMU / CNRS unit, and I am happy to be giving this talk at the Cerco seminar.&lt;/p&gt;
&lt;p&gt;Today, I will be speaking about working memory, that is, storing patterns with durations on the order of seconds, in spiking neural networks. This is a hard problem, as spiking neurons have a limited memory on the order of tens of milliseconds. How can one extend this memory to longer durations? Here, I will present a method for building &lt;em&gt;WM in Spiking Neural Networks by using Heterogeneous Delays&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;d like to &lt;em&gt;thank&lt;/em&gt; Antoine for the invitation and you for listening.
These slides are available from my website, along with a number of references. The &lt;em&gt;outline&lt;/em&gt; of the talk is as follows: first, I&amp;rsquo;ll describe how one may perform computations using Heterogeneous Delays and present a toy model example; then, I&amp;rsquo;ll show a real-scale example quantifying the performance on synthetic data.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="spiking-neural-networks-leaky-integrate-and-fire"&gt;Spiking Neural Networks: Leaky Integrate-and-Fire&lt;/h2&gt;
&lt;figure id="figure-grimaldi-et-al-2023-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="[Grimaldi *et al*, 2023, [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023, &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-neurobiology"&gt;Spiking Neural Networks: neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.sstatic.net/ixnrz.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-neurobiology-1"&gt;Spiking Neural Networks: neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-neurobiology-2"&gt;Spiking Neural Networks: neurobiology&lt;/h2&gt;
&lt;figure id="figure-diesmann-et-al-1999httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_3_diesmann_et_al_1999py"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Diesmann_et_al_1999.png" alt="[[Diesmann et al. 1999](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py" target="_blank" rel="noopener"&gt;Diesmann et al. 1999&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents. We also review&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-neurobiology-3"&gt;Spiking Neural Networks: neurobiology&lt;/h2&gt;
&lt;figure id="figure-haimerl-et-al-2019httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" alt="[[Haimerl et al, 2019](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Haimerl et al, 2019&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Izhikevich polychronization&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;yet the domain is vast, and there is a lot to do in SNNs&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-leaky-integrate-and-fire-1"&gt;Spiking Neural Networks: Leaky Integrate-and-Fire&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-heterogeneous-delays"&gt;Spiking Neural Networks: Heterogeneous Delays&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/HSD.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HSD neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="heterogeneous-delays-spiking-neural-network-hd-snn"&gt;Heterogeneous Delays Spiking Neural Network: HD-SNN&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
We used this theoretical principle in an algorithm for detecting motion in an image. To do this, we first generated event data using natural images set in motion along trajectories that resemble those produced by free exploration of the visual scene. You&amp;rsquo;ll notice several features of the event-driven output, such as the fact that faster motion generates more spikes, or that edges moving parallel to their own orientation produce few changes, and therefore little spike output: the so-called aperture problem.
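&lt;p&gt;As a minimal sketch of how such an event stream can be generated from a sequence of frames (the function name and threshold below are illustrative assumptions, not the exact pipeline used here):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def frames_to_events(frames, theta=0.1):
    # frames: luminance values of shape (T, H, W), in [0, 1]
    # emit an ON (resp. OFF) event wherever the change between two
    # successive frames exceeds (resp. falls below) the threshold
    diff = np.diff(frames, axis=0)
    on_events = np.greater_equal(diff, theta)
    off_events = np.less_equal(diff, -theta)
    return on_events, off_events
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Faster motion produces larger frame-to-frame changes, hence more events, while an edge translating parallel to its own orientation produces almost none.&lt;/p&gt;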
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-network-polychronization"&gt;Spiking Neural Network: Polychronization&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich_left.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The core idea of the method follows the use of polychronous groups as defined by Izhikevich in 2006. Suppose three presynaptic neurons are connected to two postsynaptic neurons by certain weights and certain delays, which correspond to the time it takes for a spike to travel from one neuron to the next.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-network-polychronization-1"&gt;Spiking Neural Network: Polychronization&lt;/h2&gt;
&lt;figure id="figure-izhikevich-2006httpsdoiorg101162089976606775093882"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich_middle.svg" alt="[Izhikevich (2006)](https://doi.org/10.1162/089976606775093882)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://doi.org/10.1162/089976606775093882" target="_blank" rel="noopener"&gt;Izhikevich (2006)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
If we assume these delays are different, then when the presynaptic neurons are activated synchronously, the postsynaptic currents do not coincide in time, such that the membrane potential does not reach the firing threshold.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-network-polychronization-2"&gt;Spiking Neural Network: Polychronization&lt;/h2&gt;
&lt;figure id="figure-izhikevich-2006httpsdoiorg101162089976606775093882"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich.svg" alt="[Izhikevich (2006)](https://doi.org/10.1162/089976606775093882)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://doi.org/10.1162/089976606775093882" target="_blank" rel="noopener"&gt;Izhikevich (2006)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
However, if the timing of the presynaptic spikes forms a &lt;em&gt;spiking motif&lt;/em&gt; such that they reach the soma of neuron b_1 at the same time, then this neuron will be selectively activated.
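&lt;p&gt;A minimal numerical sketch of this coincidence detection (the delay values and threshold below are illustrative assumptions, not values from the actual model):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

delays = np.array([5, 3, 1])   # conduction delays from a_1, a_2, a_3 (time steps)
threshold = 2.5                # firing threshold, in units of unit-weight inputs

def fires(presynaptic_spike_times):
    # each spike reaches the soma after its own synaptic delay
    arrival_times = presynaptic_spike_times + delays
    # largest number of inputs arriving within the same time step
    coincident = max(np.sum(arrival_times == t) for t in arrival_times)
    return coincident &gt;= threshold

print(fires(np.array([0, 0, 0])))  # synchronous spikes arrive spread out: False
print(fires(np.array([0, 2, 4])))  # motif matched to the delays, all arrive at t=5: True
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Only the spatio-temporal motif whose timing compensates the delays drives the postsynaptic neuron above threshold.&lt;/p&gt;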
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-network-polychronization-3"&gt;Spiking Neural Network: Polychronization&lt;/h2&gt;
&lt;figure id="figure-lp-2026httpsarxivorgabs260414096"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/izhikevich_rec.svg" alt="[LP (2026)](https://arxiv.org/abs/2604.14096)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://arxiv.org/abs/2604.14096" target="_blank" rel="noopener"&gt;LP (2026)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Following this idea, and similar to the original network from Izhikevich, one may build such a process in a recurrent network. Synapses are defined similarly, but act on the same population rather than a separate one.&lt;/p&gt;
&lt;p&gt;Given this architecture, and deviating now from Izhikevich, we may wish to define motifs such that, given one context window (green shaded area), the network predicts the occurrence of the spikes at the next time step. This creates a new context and a new prediction, such that we may iteratively build the whole sequence (see the sketch below).&lt;/p&gt;
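&lt;p&gt;A sketch of this autoregressive rollout (shapes and names are assumptions; the leak and reset terms of the LIF equation are omitted for clarity):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def next_spikes(W, context, threshold=1.0):
    # W: weights of shape (N, N, D); context: last D spike vectors, shape (N, D),
    # most recent step in the last column (so W[..., 0] is the shortest delay)
    # I_j(t) = sum_i sum_d W[j, i, d] * s_i(t - d)
    I = np.einsum('jid,id-&gt;j', W, context[:, ::-1])
    return (I &gt;= threshold).astype(float)   # Heaviside spike generation

def rollout(W, context, T):
    # slide the context window forward, feeding each prediction back in
    spikes = []
    for _ in range(T):
        s = next_spikes(W, context)
        spikes.append(s)
        context = np.concatenate([context[:, 1:], s[:, None]], axis=1)
    return np.stack(spikes, axis=1)          # shape (N, T)
&lt;/code&gt;&lt;/pre&gt;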
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="methods--bptt-snn-torch---synthetic-target"&gt;Methods : BPTT (snn Torch) - synthetic target&lt;/h2&gt;
&lt;div class="r-hstack"&gt;
&lt;div style="flex: 1; padding-right: 1rem;"&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/unrolled.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;/div&gt;
&lt;div style="flex: 1; padding-left: 1rem;"&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/pattern.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;We built an implementation of the network using snnTorch; the delays simply add another level of propagation in the unrolled computational graph, represented here by the delay line at the bottom. We implement a network of 512 neurons with 41 delays and 8 different patterns.&lt;/p&gt;
&lt;p&gt;We define the task as repeating &lt;em&gt;exactly&lt;/em&gt; all spikes from a randomly drawn target with a mean firing rate of 1 spike per second. The loss is based on the F1-score, that is, the harmonic mean of recall and precision. Using a fast-sigmoid surrogate gradient approximation, the network learns the target in approximately 10 minutes on a laptop.&lt;/p&gt;
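&lt;p&gt;For instance, a differentiable (soft) version of the F1-score can serve as the training loss; this is a minimal PyTorch sketch under that assumption (the function name is hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch

def soft_f1_loss(output_spikes, target_spikes, eps=1e-8):
    # both tensors have shape (N, T), with entries in [0, 1]
    tp = (output_spikes * target_spikes).sum()
    precision = tp / (output_spikes.sum() + eps)
    recall = tp / (target_spikes.sum() + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return 1.0 - f1   # minimizing 1 - F1 maximizes the F1-score
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Since the Heaviside spike non-linearity has zero gradient almost everywhere, the backward pass relies on a surrogate gradient, such as the fast sigmoid available in snnTorch.&lt;/p&gt;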
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="methods--weight-initialization"&gt;Methods : Weight initialization&lt;/h2&gt;
&lt;span class="fragment " &gt;
$$ I_j(t) = \sum_{i=1}^{N} \bigl ( \sum_{d=1}^{D} \mathbf{W}_{j, i, d} \cdot s_i(t-d) \bigr ) $$
$$ u_j(t) = \beta \cdot u_j(t-1) \cdot (1 - s_j(t-1)) + I_j(t) $$
$$ s_j(t) = \mathbf{H}[u_j(t) \geq \vartheta] $$
&lt;/span&gt;
&lt;span class="fragment " &gt;
$$ \mathbf{W} \mathbf{C} \approx \mathbf{S} $$
&lt;/span&gt;
&lt;span class="fragment " &gt;
$$ w_{j, i, d} = \frac{1}{N \cdot D \cdot p_A \cdot M} \sum_{\mu=1}^{M} \sum_{t=D+1}^{T} s_{j}^{\mu}(t) \cdot s_i^{\mu}(t-d) $$
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;However, convergence is quite slow, in particular because some regions of the weight space correspond to degenerate non-linear regimes (dead or epileptic).&lt;/p&gt;
&lt;p&gt;One may however use a dedicated weight initialization. Indeed, each prediction can be seen as a linear prediction of the next time step; one may concatenate all these equations together and then invert the system to obtain the weights using a Moore-Penrose pseudo-inverse.&lt;/p&gt;
&lt;p&gt;Note that, since the resulting weights reduce to sums of co-activations between pre- and postsynaptic spikes, Hebbian-like learning may incidentally work for training this type of network.&lt;/p&gt;
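&lt;p&gt;A minimal sketch of this initialization, assuming the context matrix C is built by stacking delayed copies of the target spike trains (names and shapes are assumptions):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def init_weights(S, D):
    # S: target spike trains of shape (N, T); D: number of delays
    N, T = S.shape
    # context matrix C of shape (N * D, T - D): for each delay d, the spikes s_i(t - d)
    C = np.stack([S[:, D - d : T - d] for d in range(1, D + 1)], axis=1)
    C = C.reshape(N * D, T - D)
    # solve W C = S in the least-squares sense with the Moore-Penrose pseudo-inverse
    W = S[:, D:] @ np.linalg.pinv(C)
    return W.reshape(N, N, D)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This places the network close to the target dynamics before any gradient-based fine-tuning.&lt;/p&gt;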
&lt;/aside&gt;
&lt;!--
---
## Results : recall of target with weight intialization
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/target.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt; --&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="results--recall-of-target"&gt;Results : recall of target&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/pattern.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--recall-of-target-1"&gt;Results : recall of target&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/target.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--memory-retrieval"&gt;Results : memory retrieval&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/retrieval.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
As a conclusion, this heterogeneous-delay spiking neural network provides an efficient model of working memory. We show here an example of memory retrieval.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--memory-retrieval-1"&gt;Results : memory retrieval&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/retrieval.svg" alt="" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
As a conclusion, this heterogeneous-delay spiking neural network provides an efficient model of working memory. We show here an example of memory retrieval.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--recall-of-target-with-noise"&gt;Results : recall of target with noise&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/fraction_target_init.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--recall-of-target-with-noise-1"&gt;Results : recall of target with noise&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/fraction_target.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--recall-of-target-with-noise-2"&gt;Results : recall of target with noise&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/fraction_target_score.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--recall-of-target-with-noise-3"&gt;Results : recall of target with noise&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/p_flip_target_init.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--recall-of-target-with-noise-4"&gt;Results : recall of target with noise&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/p_flip_target.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--recall-of-target-with-noise-5"&gt;Results : recall of target with noise&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/p_flip_score.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--role-of-parameters"&gt;Results : role of parameters&lt;/h2&gt;
&lt;div class="r-hstack" style="gap: 0.1rem;"&gt;
&lt;div style="flex: 1; margin: 0;"&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/MNESIS_N_SM.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;/div&gt;
&lt;div style="flex: 1; margin: 0;"&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/MNESIS_N_time.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;/div&gt;
&lt;div style="flex: 1; margin: 0;"&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/MNESIS_num_delay.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;!-- no-branding --&gt;
&lt;h1 id="learning-working-memory-in-recurrent-spiking-neural-networks-using-heterogeneous-delays-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-04-16-cerco/?transition=fade" target="_blank" rel="noopener"&gt;Learning Working Memory in Recurrent Spiking Neural Networks Using Heterogeneous Delays&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-04-16-cerco/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="cerco-seminar-1"&gt;&lt;u&gt;&lt;a href="https://cerco.cnrs.fr" target="_blank" rel="noopener"&gt;Cerco seminar&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2026-04-16-1"&gt;[2026-04-16]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
Thanks for your attention.
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>Working Memory in SNNs</title><link>https://laurentperrinet.github.io/slides/2026-04-15-airov/</link><pubDate>Wed, 15 Apr 2026 09:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2026-04-15-airov/</guid><description>&lt;section&gt;
&lt;!-- no-branding --&gt;
&lt;h1 id="learning-working-memory-in-recurrent-spiking-neural-networks-using-heterogeneous-delays"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-04-15-airov/?transition=fade" target="_blank" rel="noopener"&gt;Learning Working Memory in Recurrent Spiking Neural Networks Using Heterogeneous Delays&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-04-15-airov/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="austrian-symposium-on-ai-robotics-and-vision"&gt;&lt;u&gt;&lt;a href="https://airov.at/2026/index.html" target="_blank" rel="noopener"&gt;Austrian Symposium on AI, Robotics and Vision&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2026-04-15"&gt;[2026-04-15]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;em&gt;Hello&lt;/em&gt;, I&amp;rsquo;m Laurent Perrinet from the Institut des Neurosciences de la Timone, a joint AMU / CNRS unit, and I am happy to be giving this talk at this AIROV workshop on Recent Advances in SNNs.&lt;/p&gt;
&lt;p&gt;Today, I will be speaking about working memory, that is, storing patterns with durations on the order of seconds, in spiking neural networks. This is a hard problem, as spiking neurons have a limited memory on the order of tens of milliseconds. How can one extend this memory to longer durations? Here, I will present a method for building &lt;em&gt;WM in Spiking Neural Networks by using Heterogeneous Delays&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;d like to &lt;em&gt;thank&lt;/em&gt; Sander Bohté and Sebastian Otte for the organization of this workshop and you for listening.
These slides are available from my website, along with a number of references. The &lt;em&gt;outline&lt;/em&gt; of the talk is as follows: first, I&amp;rsquo;ll describe how one may perform computations using Heterogeneous Delays and present a toy model example; then, I&amp;rsquo;ll show a real-scale example quantifying the performance on synthetic data.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="polychronization"&gt;Polychronization&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich_left.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The core idea of the method follows the use of polychronous groups as defined by Izhikevich in 2006. Suppose three presynaptic neurons are connected to two postsynaptic neurons by certain weights and certain delays, which correspond to the time it takes for a spike to travel from one neuron to the next.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="polychronization-1"&gt;Polychronization&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich_middle.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
If we assume these delays are different, then when the presynaptic neurons are activated synchronously, the postsynaptic currents do not coincide in time, such that the membrane potential does not reach the firing threshold.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="polychronization-2"&gt;Polychronization&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
However, if the timing of the presynaptic spikes forms a &lt;em&gt;spiking motif&lt;/em&gt; such that they reach the soma of neuron b_1 at the same time, then this neuron will be selectively activated.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="polychronization-3"&gt;Polychronization&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/izhikevich_rec.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Following this idea, and similar to the original network from Izhikevich, one may build such a process in a recurrent network. Synapses are defined similarly, but act on the same population rather than a separate one.&lt;/p&gt;
&lt;p&gt;Given this architecture, and deviating now from Izhikevich, we may wish to define motifs such that, given one context window (green shaded area), the network predicts the occurrence of the spikes at the next time step. This creates a new context and a new prediction, such that we may iteratively build the whole sequence.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="methods--bptt-snn-torch---synthetic-target"&gt;Methods : BPTT (snn Torch) - synthetic target&lt;/h2&gt;
&lt;div class="r-hstack"&gt;
&lt;div style="flex: 1; padding-right: 1rem;"&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/unrolled.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;/div&gt;
&lt;div style="flex: 1; padding-left: 1rem;"&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/pattern.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;We built an implementation of the network using snnTorch; the delays simply add another level of propagation in the unrolled computational graph, represented here by the delay line at the bottom. We implement a network of 512 neurons with 41 delays and 8 different patterns.&lt;/p&gt;
&lt;p&gt;We define the task as repeating &lt;em&gt;exactly&lt;/em&gt; all spikes from a randomly drawn target with a mean firing rate of 1 spike per second. The loss is based on the F1-score, that is, the harmonic mean of recall and precision. Using a fast-sigmoid surrogate gradient approximation, the network learns the target in approximately 10 minutes on a laptop.&lt;/p&gt;
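&lt;p&gt;A minimal sketch of how the delays can enter the unrolled graph as one extra buffered dimension (pure PyTorch; shapes and names are assumptions, not the exact snnTorch implementation):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch

def step(W, buffer, u, beta=0.9, threshold=1.0):
    # W: (N, N, D) weights; buffer: (N, D) last D spike vectors, most recent last;
    # u: (N,) membrane potentials
    s_prev = buffer[:, -1]
    # I_j(t) = sum_i sum_d W[j, i, d] * s_i(t - d)
    I = torch.einsum('jid,id-&gt;j', W, torch.flip(buffer, dims=[1]))
    u = beta * u * (1.0 - s_prev) + I      # leaky integration with reset, as in the LIF equation
    s = (u &gt;= threshold).float()           # Heaviside; a surrogate gradient replaces it for BPTT
    buffer = torch.cat([buffer[:, 1:], s[:, None]], dim=1)
    return s, u, buffer
&lt;/code&gt;&lt;/pre&gt;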
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="methods--weight-initialization"&gt;Methods : Weight initialization&lt;/h2&gt;
&lt;span class="fragment " &gt;
$$ I_j(t) = \sum_{i=1}^{N} \bigl ( \sum_{d=1}^{D} \mathbf{W}_{j, i, d} \cdot s_i(t-d) \bigr ) $$
$$ u_j(t) = \beta \cdot u_j(t-1) \cdot (1 - s_j(t-1)) + I_j(t) $$
$$ s_j(t) = \mathbf{H}[u_j(t) \geq \vartheta] $$
&lt;/span&gt;
&lt;span class="fragment " &gt;
$$ \mathbf{W} \mathbf{C} \approx \mathbf{S} $$
&lt;/span&gt;
&lt;span class="fragment " &gt;
$$ w_{j, i, d} = \frac{1}{N \cdot D \cdot p_A \cdot M} \sum_{\mu=1}^{M} \sum_{t=D+1}^{T} s_{j}^{\mu}(t) \cdot s_i^{\mu}(t-d) $$
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;However, convergence is quite slow, in particular because some regions of the weight space correspond to degenerate non-linear regimes (dead or epileptic).&lt;/p&gt;
&lt;p&gt;One may however use a dedicated weight initialization. Indeed, each prediction can be seen as a linear prediction of the next time step; one may concatenate all these equations together and then invert the system to obtain the weights using a Moore-Penrose pseudo-inverse.&lt;/p&gt;
&lt;p&gt;Note that, since the resulting weights reduce to sums of co-activations between pre- and postsynaptic spikes, Hebbian-like learning may incidentally work for training this type of network.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="results--recall-of-target"&gt;Results : recall of target&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/target.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--role-of-parameters"&gt;Results : role of parameters&lt;/h2&gt;
&lt;div class="r-hstack"&gt;
&lt;div style="flex: 1; padding-right: 1rem;"&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/MNESIS_N_SM.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;/div&gt;
&lt;div style="flex: 1; padding-left: 1rem;"&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/MNESIS_N_time.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;/div&gt;
&lt;div style="flex: 1; padding-left: 1rem;"&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/MNESIS_num_delay.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--memory-retrieval"&gt;Results : memory retrieval&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/retrieval.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
As a conclusion, this heterogeneous-delay spiking neural network provides an efficient model of working memory. We show here an example of memory retrieval.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--memory-retrieval-1"&gt;Results : memory retrieval&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/retrieval.svg" alt="" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
As a conclusion, this heterogeneous-delay spiking neural network provides an efficient model of working memory. We show here an example of memory retrieval.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;!-- no-branding --&gt;
&lt;h1 id="learning-working-memory-in-recurrent-spiking-neural-networks-using-heterogeneous-delays-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-04-15-airov/?transition=fade" target="_blank" rel="noopener"&gt;Learning Working Memory in Recurrent Spiking Neural Networks Using Heterogeneous Delays&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-04-15-airov/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="austrian-symposium-on-ai-robotics-and-vision-1"&gt;&lt;u&gt;&lt;a href="https://airov.at/2026/index.html" target="_blank" rel="noopener"&gt;Austrian Symposium on AI, Robotics and Vision&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2026-04-15-1"&gt;[2026-04-15]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
Thanks for your attention.
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2026-04-11-intelligence-du-regard</title><link>https://laurentperrinet.github.io/slides/2026-04-11-intelligence-du-regard/</link><pubDate>Sat, 11 Apr 2026 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2026-04-11-intelligence-du-regard/</guid><description>&lt;section&gt;
&lt;h1 id="l"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-04-11-intelligence-du-regard/?transition=fade" target="_blank" rel="noopener"&gt;L&amp;rsquo;intelligence du regard&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-04-11-intelligence-du-regard/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="forum-des-sciences-cognitives-2026"&gt;&lt;u&gt;&lt;a href="https://cognivence.scicog.fr/forum-des-sciences-cognitives/" target="_blank" rel="noopener"&gt;Forum des Sciences Cognitives 2026&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2026-04-11"&gt;[2026-04-11]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;a href="https://laurentperrinet.github.io/project/art-science/" target="_blank" rel="noopener"&gt;Art-Sciences&lt;/a&gt; /
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/qrcode.png" alt="QR code" height="80" width="80"&gt; --&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Hello, let me introduce myself: Laurent Perrinet. I am very happy to take part in the Forum des Sciences Cognitives and I thank the organizers for this invitation. I am all the more delighted because I will not be talking about my usual research, but rather presenting the collaboration I have with a visual artist in Marseille. My goal: to convince you of the benefits of opening up to the art world in order to better unravel the mysteries of cognition in all its diversity.&lt;/p&gt;
&lt;p&gt;This talk has several objectives:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Introduce you to contemporary artists who question our relationship to what we see&lt;/li&gt;
&lt;li&gt;Show the diversity of vision through the interactions between art and science&lt;/li&gt;
&lt;li&gt;Unveil some of the mysteries of vision through the artistic experience, and the implications this can have for our understanding of cognition&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="art--sciences-révèlent-la-diversité-de-notre-vision"&gt;Art &amp;amp; Sciences révèlent la diversité de notre vision&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;figure id="figure-etienne-reyhttpslaurentperrinetgithubioauthoretienne-rey"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/author/etienne-rey/avatar.jpg" alt="[Etienne Rey](https://laurentperrinet.github.io/author/etienne-rey/)" loading="lazy" data-zoomable width="35%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/author/etienne-rey/" target="_blank" rel="noopener"&gt;Etienne Rey&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
First of all, let me introduce my partner in this exploration, which allowed me to connect my own research project with his work as a visual artist. Meet Etienne Rey, a visual artist based at the Friche Belle de Mai in Marseille. He is a recognized artist whose works can be seen either in public space or in Montreal, Paris and Marseille, in galleries or at festivals such as Ososphère. Being a &amp;ldquo;plasticien&amp;rdquo; means creating tangible works: paintings, sculptures, or video and interactive installations.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="art--sciences-révèlent-la-diversité-de-notre-vision-1"&gt;Art &amp;amp; Sciences révèlent la diversité de notre vision&lt;/h2&gt;
&lt;figure id="figure-etienne-rey-2010-spectre-audiographiquehttpsondesparallelesorgprojetscloche-spectre-audiographique-diffraction"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://ondesparalleles.org/wp-content/uploads/2014/02/cloche_fiche_a.jpg" alt="[Etienne Rey (2010) Spectre audiographique](https://ondesparalleles.org/projets/cloche-spectre-audiographique-diffraction/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://ondesparalleles.org/projets/cloche-spectre-audiographique-diffraction/" target="_blank" rel="noopener"&gt;Etienne Rey (2010) Spectre audiographique&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Our collaboration began when he invited me to present my work on visual perception at the opening of this piece, a spatio-temporal visualization of the audiographic spectrum of the sound of a bell. It is made of multiple semi-transparent, dichroic plates, that is, plates able to display different colors depending on the viewing angle. This sculptural volume fleetingly conveys the full depth of this sensory experience.&lt;/p&gt;
&lt;p&gt;This is where &amp;ldquo;the unreasonable effectiveness of vision&amp;rdquo; reveals itself, to echo Wigner&amp;rsquo;s words about the ability of mathematics to probe the world. We face a similar observation: how is it possible, with so few resources, to obtain such a vivid perception of the world around us?&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="lirraisonnable-efficacité-de-la-vision"&gt;&amp;ldquo;L&amp;rsquo;irraisonnable efficacité de la vision&amp;rdquo;&lt;/h2&gt;
&lt;figure id="figure-nage-de-la-raie-1894-étienne-jules-mareyhttpsfrwikipediaorgwikiétienne-jules_marey"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/9/95/Nage_de_la_raie%2C_Marey%2C_1894.gif" alt="Nage de la raie, 1894 [[Étienne-Jules Marey]](https://fr.wikipedia.org/wiki/Étienne-Jules_Marey)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Nage de la raie, 1894 &lt;a href="https://fr.wikipedia.org/wiki/%c3%89tienne-Jules_Marey" target="_blank" rel="noopener"&gt;[Étienne-Jules Marey]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;I hope to surprise you with this &amp;ldquo;flight&amp;rdquo; of a ray, a swim captured by Étienne-Jules Marey using chronophotography. I find this animated image remarkable in several respects.&lt;/p&gt;
&lt;p&gt;First, Marey actually used a camera shaped like a machine gun, with photographic plates as bullets, to &amp;ldquo;shoot&amp;rdquo; a dynamic scene that the human eye would struggle to decompose.&lt;/p&gt;
&lt;p&gt;The stakes are first of all scientific: understanding movement. The ISM, the Institute of Movement Sciences in Marseille, is named after him.&lt;/p&gt;
&lt;p&gt;There is also an artistic pleasure, one that grew into the film industry: a succession of images can give the perception of smooth motion. That is where it becomes unreasonable: static images of mediocre quality nevertheless give a vivid impression.&lt;/p&gt;
&lt;p&gt;And this ability is not new.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://upload.wikimedia.org/wikipedia/commons/5/5b/18_PanneauDesLions%28PartieDroite%29BisonsPoursuivisParDesLions.jpg"
&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;As demonstrated by the accuracy of this pride of lions: or is it a single lion depicted at different moments, in a cinematographic effect?&lt;/p&gt;
&lt;p&gt;What remains is the connection we feel today in appreciating the representation of our environment by artists so distant in time and yet so close in sensibility.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="lirraisonnable-efficacité-de-la-vision-1"&gt;&amp;ldquo;L&amp;rsquo;irraisonnable efficacité de la vision&amp;rdquo;&lt;/h2&gt;
&lt;figure id="figure-panneau-des-lions-grotte-chauvet--30-kahttpsfrwikipediaorgwikigrotte_chauvet"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/5/5b/18_PanneauDesLions%28PartieDroite%29BisonsPoursuivisParDesLions.jpg" alt="Panneau Des Lions [[Grotte chauvet, -30 kA]](https://fr.wikipedia.org/wiki/Grotte_Chauvet)" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Panneau des Lions &lt;a href="https://fr.wikipedia.org/wiki/Grotte_Chauvet" target="_blank" rel="noopener"&gt;[Grotte Chauvet, -30 ka]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;I encourage you to go and see these works at Chauvet 2.&lt;/p&gt;
&lt;p&gt;But what is this sense, vision?&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="lirraisonnable-efficacité-de-la-vision-2"&gt;&amp;ldquo;L&amp;rsquo;irraisonnable efficacité de la vision&amp;rdquo;&lt;/h2&gt;
&lt;figure id="figure-comment-la-vision-a-évolué-lp-2024-the-conversationhttpstheconversationcomchats-mouches-humains-comment-la-vision-a-evolue-en-de-multiples-facettes-220083"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://images.theconversation.com/files/568221/original/file-20240108-17-78s0cj.png" alt="Comment la vision a évolué... [[LP, 2024, The Conversation]](https://theconversation.com/chats-mouches-humains-comment-la-vision-a-evolue-en-de-multiples-facettes-220083) " loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Comment la vision a évolué&amp;hellip; &lt;a href="https://theconversation.com/chats-mouches-humains-comment-la-vision-a-evolue-en-de-multiples-facettes-220083" target="_blank" rel="noopener"&gt;[LP, 2024, The Conversation]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;The organ of our vision is the eye, whose anatomy is shown here.&lt;/p&gt;
&lt;p&gt;First miracle: light energy is transformed into an electro-chemical signal, and the magic can begin.&lt;/p&gt;
&lt;p&gt;I will not go into the details, which would take hours; let us rather explore what we call&amp;hellip;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Kitaoka.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Ilusions of brightness or lightness &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Visual illusions.&lt;/p&gt;
&lt;p&gt;Here we have a simple demonstration by Akiyoshi Kitaoka, between science and minimal art, which shows that this is not a bug but a capability of the visual system.&lt;/p&gt;
&lt;p&gt;The term &amp;ldquo;illusion&amp;rdquo; is somewhat misleading. Rather, vision has the ability to adapt to context, here to changing lighting conditions, from the light of the moon (1 candela) to that of the sun (100,000 candelas).&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion_without.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Vision can also play with the geometry of the image. Take these two parallel lines: they are perfectly straight.
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;But if we place them in front of this fan of lines, they appear slightly bent.&lt;/p&gt;
&lt;p&gt;The explanation may lie in the fact that we interpret the image in 3D, and the distortions we expect make curved lines the more plausible percept.&lt;/p&gt;
&lt;p&gt;Here, vision shows all its creativity by producing illusions of its own.&lt;/p&gt;
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles--paréidolie"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt; : &lt;a href="https://fr.wikipedia.org/wiki/Par%c3%a9idolie" target="_blank" rel="noopener"&gt;Paréidolie&lt;/a&gt;&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;figure id="figure-cydonia-mensae-1976-viking-orbiter-imagehttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Face-on-mars.jpg" alt="[Cydonia Mensae (1976) *Viking Orbiter image*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (1976) &lt;em&gt;Viking Orbiter image&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;This image is a remarkable case in point.&lt;/p&gt;
&lt;p&gt;In 1976, the Viking Orbiter probe photographed the surface of Mars from every angle, a surface we previously knew only through telescopes. The hypothesis of canals had emerged at the turn of the century, and with it the possibility of intelligent life, the &amp;ldquo;Martians&amp;rdquo;.&lt;/p&gt;
&lt;p&gt;The results came in: the Martians themselves seemed to be carved into the rock.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles--paréidolie-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt; : &lt;a href="https://fr.wikipedia.org/wiki/Par%c3%a9idolie" target="_blank" rel="noopener"&gt;Paréidolie&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_low.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Thirty years later, a new probe scrutinized the surface of Mars and photographed the same terrain, revealing&amp;hellip;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles--paréidolie-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt; : &lt;a href="https://fr.wikipedia.org/wiki/Par%c3%a9idolie" target="_blank" rel="noopener"&gt;Paréidolie&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_high.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&amp;hellip; it&amp;rsquo;s just a rock!&lt;/p&gt;
&lt;p&gt;Don&amp;rsquo;t tell Elon, so that he still goes to Mars.&lt;/p&gt;
&lt;p&gt;The moral: more information kills fake news.&lt;/p&gt;
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="la-perception-comme-processus-émergent"&gt;La perception comme processus émergent&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;figure id="figure-victor-vasarely-1971-gare-montparnassehttpsfrwikipediaorgwikivictor_vasarely"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://mcalp.fr/wp-content/uploads/2014/10/Gare-Montparnasse-10.jpg" alt="[Victor Vasarely (1971) Gare Montparnasse](https://fr.wikipedia.org/wiki/Victor_Vasarely)" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Victor_Vasarely" target="_blank" rel="noopener"&gt;Victor Vasarely (1971) Gare Montparnasse&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Other kinds of visual illusions play with our visual creativity. You might think of Escher (there was a beautiful exhibition at the Musée de la Monnaie), and perhaps less readily of Victor Vasarely, a visual artist of Hungarian origin and the founder of Op Art.&lt;/p&gt;
&lt;p&gt;In public space, you can find his work in the Renault logo, on billboards when there is no advertisement, at the Gare Montparnasse. And there is a very beautiful foundation in Aix.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="la-perception-comme-processus-émergent-1"&gt;La perception comme processus émergent&lt;/h2&gt;
&lt;p&gt;
&lt;figure id="figure-françois-morrelet-1962-mönchengladbach-sphère---trameshttpsfrwikipediaorgwikifrançois_morellet"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/c/c5/Mgmorellet.jpg" alt="[François Morrelet (1962) Mönchengladbach, Sphère - trames](https://fr.wikipedia.org/wiki/François_Morellet)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Fran%c3%a7ois_Morellet" target="_blank" rel="noopener"&gt;François Morrelet (1962) Mönchengladbach, Sphère - trames&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Another artist from that period is François Morellet; this year we are celebrating the hundredth anniversary of his birth. Here, a grid that reveals alignments as you move around it. Kinetic art.
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="la-perception-comme-processus-émergent-2"&gt;La perception comme processus émergent&lt;/h2&gt;
&lt;h2 id="hahahugoshortcode415s30hbhb"&gt;
&lt;figure id="figure-etienne-rey-trameshttpslaurentperrinetgithubiopost2018-04-10_trames"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/post/2018-04-10_trames/featured.png" alt="[Etienne Rey, Trames](https://laurentperrinet.github.io/post/2018-04-10_trames/)" loading="lazy" data-zoomable width="72%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/post/2018-04-10_trames/" target="_blank" rel="noopener"&gt;Etienne Rey, Trames&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;It is in this context that we experimented with Etienne Rey on grids (&amp;lsquo;trames&amp;rsquo;) that create these interference patterns. New forms emerge: hexagons, as used in the Alhambra, and three-dimensional spaces.&lt;/p&gt;
&lt;p&gt;I will come back to the fact that you can transform the image by moving your eyes.&lt;/p&gt;
&lt;/aside&gt;&lt;/p&gt;
&lt;h2 id="la-perception-comme-processus-émergent-3"&gt;La perception comme processus émergent&lt;/h2&gt;
&lt;figure id="figure-etienne-rey-2025-variable-density-série-delaunayhttpslaurentperrinetgithubiopost2026-02-20_ososphere"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/post/2026-02-20_ososphere/643545855_18444436261109562_1480440487903792518_n.jpg" alt="[Etienne Rey (2025) Variable Density, série Delaunay](https://laurentperrinet.github.io/post/2026-02-20_ososphere/)" loading="lazy" data-zoomable width="61.8%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/post/2026-02-20_ososphere/" target="_blank" rel="noopener"&gt;Etienne Rey (2025) Variable Density, série Delaunay&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;How do we put the pieces of the puzzle together?&lt;/p&gt;
&lt;p&gt;More recently, at the Ososphère festival in Strasbourg, the Delaunay series: an arrangement of points optimized to cover the square as well as possible, but with a boundary condition. Perceptual limits.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/post/2024-11-07_vibration-apparences/featured.jpg" alt="" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
To make the connection, Etienne organized an exhibition at the Musée Granet, a first for contemporary art there. It brings together several of the works from our collaboration, including the poster piece that I will now describe.
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="la-vibration-des-apparences"&gt;La vibration des apparences&lt;/h2&gt;
&lt;figure id="figure-paul-cézanne-montagne-sainte-victoire-1904httpsenwikipediaorgwikipaul_cc3a9zanne"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/c/c9/Montagne_Sainte-Victoire%2C_par_Paul_C%C3%A9zanne_108.jpg" alt="[Paul Cézanne, Montagne Sainte-Victoire, 1904](https://en.wikipedia.org/wiki/Paul_C%C3%A9zanne)" loading="lazy" data-zoomable width="62%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Paul_C%C3%A9zanne" target="_blank" rel="noopener"&gt;Paul Cézanne, Montagne Sainte-Victoire, 1904&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;The Musée Granet is Cézanne&amp;rsquo;s museum.&lt;/p&gt;
&lt;p&gt;The title of the exhibition refers&amp;hellip;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="la-vibration-des-apparences-1"&gt;La vibration des apparences&lt;/h2&gt;
&lt;p&gt;
&lt;figure id="figure-merleau-ponty-sens-et-non-senshttpslaurentperrinetgithubioauthoretienne-rey"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/Merleau-Ponty_Sens-et-non-sens.png" alt="[Merleau-Ponty, Sens et non-sens](https://laurentperrinet.github.io/author/etienne-rey/)" loading="lazy" data-zoomable width="62%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/author/etienne-rey/" target="_blank" rel="noopener"&gt;Merleau-Ponty, Sens et non-sens&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&amp;hellip; to a text by Merleau-Ponty and a passage about Cézanne. The vibration, the interference between colours, is what renders reality. This is a line of research that I will illustrate with three of the works on show.
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-video="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/visite_virtuelle.mp4"
data-background-video-loop="true"
data-background-video-muted="true"
&gt;
&lt;!--
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/video1.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;aside class="notes"&gt;
But let us start with a tour of the two rooms of the exhibition.
&lt;/aside&gt;
&lt;hr&gt;
&lt;p&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://laurentperrinet.github.io/post/2019-06-22_ardemone/Avignon-02.jpg"
data-height="80%"
&gt;
&lt;aside class="notes"&gt;
The first is Densité Floue: high-entropy Delaunay patterns, superimposed on a glass plate 1 cm above. An effect of depth and halo, a perspective that depends on the viewpoint.
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;
&lt;figure id="figure-etienne-rey--2019-horizon-faille---densité-flou---sans-gravité---une-poétique-de-lair-à-ardenome-avignon--httpswwwenrevenantdelexpocom"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/post/2019-06-22_ardemone/Avignon-02.jpg" alt="Etienne Rey (2019) Horizon faille - Densité flou - Sans gravité - une poétique de l’air à Ardenome Avignon https://www.enrevenantdelexpo.com" loading="lazy" data-zoomable height="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Etienne Rey (2019) Horizon faille - Densité flou - Sans gravité - une poétique de l’air à Ardenome Avignon &lt;a href="https://www.enrevenantdelexpo.com" target="_blank" rel="noopener"&gt;https://www.enrevenantdelexpo.com&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The boundary between what is perceived and what is not.
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-video="https://github.com/NaturalPatterns/2020_caustiques/raw/main/iridiscence.mp4"
data-background-video-loop="true"
data-background-video-muted="true"
&gt;
&lt;!--
&lt;video controls &gt;
&lt;source src="https://github.com/NaturalPatterns/2020_caustiques/raw/main/iridiscence.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;aside class="notes"&gt;
In Caustiques, we explore the notion of form through transformation. It is a simulation of refraction. In a swimming pool, wearing a snorkel mask, you look at the bottom of the water: the uniform illumination from the sun generates new shapes (note also these iridescences), shapes that we can make evolve between order and chaos.
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/2024-09-04_canaux_both.png"
data-height="80%"
&gt;
&lt;!--
&lt;figure id="figure-etienne-rey-la-vibration-des-apparenceshttpslaurentperrinetgithubiotalk2025-04-18-vibration-apparences"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/2024-09-04_canaux_both.png" alt="[Etienne Rey, La vibration des apparences](https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/)" loading="lazy" data-zoomable height="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/" target="_blank" rel="noopener"&gt;Etienne Rey, La vibration des apparences&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;aside class="notes"&gt;
A central work is this one, our poster piece. It consists of two hexagonal polar grids in two colours, those of supernovae: oxygen and hydrogen.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="la-vibration-des-apparences-2"&gt;La vibration des apparences&lt;/h2&gt;
&lt;figure id="figure-etienne-rey-2025-polairehttpslaurentperrinetgithubioauthoretienne-rey"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2026-01-19-art-and-science/featured.jpg" alt="[Etienne rey (2025) Polaire](https://laurentperrinet.github.io/author/etienne-rey/)" loading="lazy" data-zoomable width="62%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/author/etienne-rey/" target="_blank" rel="noopener"&gt;Etienne rey (2025) Polaire&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Zooming in lets you appreciate the interferences, the moiré (mohair) pattern.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="la-vibration-des-apparences-3"&gt;La vibration des apparences&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;retino_grid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_rho&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;34&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_phi&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;233&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_H&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_V&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;size_mag&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ecc_max&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;alpha&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;power&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;# https://laurentperrinet.github.io/sciblog/posts/2020-04-16-creating-an-hexagonal-grid.html&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;phi_v&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rho_v&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;meshgrid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;linspace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_phi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="kc"&gt;False&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;linspace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ecc_max&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_rho&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="kc"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;:],&lt;/span&gt; &lt;span class="n"&gt;sparse&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="kc"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;indexing&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;xy&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;phi_v&lt;/span&gt;&lt;span class="p"&gt;[::&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;:]&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pi&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;N_phi&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;offsets&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;colors&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;c1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c2&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;offset_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;zip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;offsets&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;colors&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;# convert to cartesian coordinates&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rho_v&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;phi_v&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;offset_&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rho_v&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cos&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;phi_v&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;R&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;size_mag&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;rho_v&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;power&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;N_rho&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;zip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ravel&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;Y&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ravel&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;R&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ravel&lt;/span&gt;&lt;span class="p"&gt;()):&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;circle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;cr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;set_source_rgba&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;hue_to_rgba&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;alpha&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;cr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fill&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;cr&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;c_blue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;240&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;opts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;N_rho&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;N_rho&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_phi&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;N_phi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_H&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;N_H&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_V&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;N_V&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.07&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;size_mag&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ecc_max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;alpha&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;c1&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;c_blue&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;dc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c2&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;c_blue&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="n"&gt;dc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nd"&gt;@disp&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;draw&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_H&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;N_H&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_V&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;N_V&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="n"&gt;cr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;retino_grid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;aside class="notes"&gt;
My job, prosaically, is to generate code, and here is one version of it. For the geeks: we create a polar grid drawn with Cairo, then duplicate it and shift each copy horizontally by an offset.
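&lt;p&gt;As a minimal, hypothetical sketch (the snippet above assumes that &lt;code&gt;np&lt;/code&gt;, a &lt;code&gt;circle&lt;/code&gt; primitive, a &lt;code&gt;hue_to_rgba&lt;/code&gt; colour helper and the &lt;code&gt;@disp&lt;/code&gt; decorator are defined elsewhere in the notebook), the missing pieces could look like the following; the helper names and parameter values here are illustrative, not the published code:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import colorsys

import cairo  # pycairo
import numpy as np

def circle(cr, x, y, r):
    # add a full disk at normalized coordinates (x, y) with radius r to the current path
    cr.arc(x, y, r, 0, 2 * np.pi)

def hue_to_rgba(hue, alpha):
    # hue in degrees -&gt; (r, g, b, a) tuple suitable for cairo&amp;#39;s set_source_rgba
    r, g, b = colorsys.hsv_to_rgb((hue % 360) / 360, 1., 1.)
    return r, g, b, alpha

# render to a PNG, using a unit coordinate system so that x, y and r live in [0, 1]
W = H = 1024
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, W, H)
cr = cairo.Context(surface)
cr.scale(W, H)
cr = retino_grid(cr, N_H=1, N_V=1, offset=0.07, size_mag=0.3,
                 ecc_max=0.8, alpha=0.80, c1=180, c2=300, power=1.5)
surface.write_to_png(&amp;#39;polar_grid.png&amp;#39;)
&lt;/code&gt;&lt;/pre&gt;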
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-video="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/2025-01-18_la-vibration-des-apparences.mp4"
data-background-video-loop="true"
data-background-video-muted="true"
&gt;
&lt;!-- ## La vibration des apparences
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/2025-01-18_la-vibration-des-apparences.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;aside class="notes"&gt;
We can play with this offset, as an artistic experiment. A first answer: vision is for putting things together. For creating something new: 1 + 1 = more than 2. But what is this for?
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="à-quoi-sert-la-vision-"&gt;À quoi sert la vision ?&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;figure id="figure-ilya-repin-1884-an-unexpected-visitorhttpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_001.jpg" alt="[Ilya Repin (1884) An Unexpected Visitor](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;Ilya Repin (1884) An Unexpected Visitor&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;We have seen that vision is a process that tries to make sense of things, even when there is not necessarily any sense to be found. This process assembles different parts together. We know something about the &amp;lsquo;how&amp;rsquo;, but we can also ask the &amp;lsquo;why&amp;rsquo;: what is vision for?&lt;/p&gt;
&lt;p&gt;This is where Yarbus comes in, with this painting by Ilya Repin: a soldier returning home, and the tension created by the surprise evoked by the painting&amp;rsquo;s title.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="à-quoi-sert-la-vision--1"&gt;À quoi sert la vision ?&lt;/h2&gt;
&lt;figure id="figure-yarbus-1965-an-unexpected-visitorhttpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_002.jpg" alt="[Yarbus (1965) An Unexpected Visitor](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;Yarbus (1965) An Unexpected Visitor&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://fr.wikipedia.org/wiki/Alfred_Iarbous" target="_blank" rel="noopener"&gt;https://fr.wikipedia.org/wiki/Alfred_Iarbous&lt;/a&gt;
Yarbus managed to&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The eye moves, it is active; this depends on the species, and on the person.&lt;/p&gt;
&lt;p&gt;The traces are structured, even in this free exploration.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="à-quoi-sert-la-vision--2"&gt;À quoi sert la vision ?&lt;/h2&gt;
&lt;figure id="figure-yarbus-1965-an-unexpected-visitor-how-longhttpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_006.jpg" alt="[Yarbus (1965) An Unexpected Visitor *How long?*](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;Yarbus (1965) An Unexpected Visitor &lt;em&gt;How long?&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
What is interesting is that this structure can be modified by giving a context. If we ask the question &amp;lsquo;How long has he been away?&amp;rsquo;, the eye movements change&amp;hellip;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="à-quoi-sert-la-vision--3"&gt;À quoi sert la vision ?&lt;/h2&gt;
&lt;figure id="figure-yarbus-1965-an-unexpected-visitor---agehttpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_003.jpg" alt="[Yarbus (1965) An Unexpected Visitor - *Age?*](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;Yarbus (1965) An Unexpected Visitor - &lt;em&gt;Age?&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;If we now ask about the age of the people in the scene, the structure changes again. Proof that we are social animals: one of the main functions of vision is to find our conspecifics and to guess their emotions.&lt;/p&gt;
&lt;p&gt;But why make saccades at all? If our retina were uniform, we would not need them; this is the case for rabbits or mice. But primates, like other predators, have what is called foveal vision: the density of photoreceptors is highest at the centre of the optical axis.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-video="http://laurentperrinet.github.io/talk/2025-12-12-main/50_fixation_sequence.mp4"
data-background-video-loop="true"
data-background-video-muted="true"
data-width="62%"
&gt;
&lt;!--
&lt;video autoplay loop &gt;
&lt;source src="http://laurentperrinet.github.io/talk/2025-12-12-main/50_fixation_sequence.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;aside class="notes"&gt;
This is a capacity that we try to understand in the lab using numerical simulations. Here I show a reconstruction of the information as the image is scanned.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="à-quoi-sert-la-vision--rétinotopie-fovéale"&gt;À quoi sert la vision : rétinotopie fovéale&lt;/h2&gt;
&lt;p&gt;
&lt;figure id="figure-jn-jérémie-e-daucé-et-lp-2026httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/retinotopy_primate.jpg" alt="[JN Jérémie, E Daucé et LP (2026)](https://laurentperrinet.github.io/publication/jeremie-25)" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25" target="_blank" rel="noopener"&gt;JN Jérémie, E Daucé et LP (2026)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
This simulation is based on a model of this retinotopic space, following the physiology of the primate.
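&lt;p&gt;As a hint of what such a model can look like, here is a minimal, hypothetical sketch of a log-polar resampling, a standard way to model foveated retinotopy; it is only illustrative and not the exact transform of the published model:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def log_polar_sample(img, N_ecc=32, N_theta=64, rho_min=0.01):
    # img: (H, W) grayscale array; returns an (N_ecc, N_theta) retinotopic map
    H, W = img.shape
    cx, cy = W / 2, H / 2
    rho_max = min(cx, cy)
    # eccentricities spaced logarithmically: dense near the fovea, sparse in the periphery
    rhos = rho_min * (rho_max / rho_min) ** (np.arange(N_ecc) / (N_ecc - 1))
    thetas = np.linspace(0, 2 * np.pi, N_theta, endpoint=False)
    rr, tt = np.meshgrid(rhos, thetas, indexing=&amp;#39;ij&amp;#39;)
    x = np.clip(cx + rr * np.cos(tt), 0, W - 1).astype(int)
    y = np.clip(cy + rr * np.sin(tt), 0, H - 1).astype(int)
    return img[y, x]

# usage on a random test image
img = np.random.rand(256, 256)
retino = log_polar_sample(img)
print(retino.shape)  # (32, 64)
&lt;/code&gt;&lt;/pre&gt;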
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="à-quoi-sert-la-vision--rétinotopie-fovéale-1"&gt;À quoi sert la vision : rétinotopie fovéale&lt;/h2&gt;
&lt;figure id="figure-jn-jérémie-e-daucé-et-lp-2026httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/retinotopy_dolphin.jpg" alt="[JN Jérémie, E Daucé et LP (2026)](https://laurentperrinet.github.io/publication/jeremie-25)" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25" target="_blank" rel="noopener"&gt;JN Jérémie, E Daucé et LP (2026)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Note the great diversity of retinotopies; here we show density maps.&lt;/p&gt;
&lt;p&gt;Dolphins can have one fovea, or even two (so four in total :-) ).&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="à-quoi-sert-la-vision--rétinotopie-fovéale-2"&gt;À quoi sert la vision : rétinotopie fovéale&lt;/h2&gt;
&lt;p&gt;
&lt;figure id="figure-jn-jérémie-e-daucé-et-lp-2026httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/retinotopy_hallucinations.jpg" alt="[JN Jérémie, E Daucé et LP (2026)](https://laurentperrinet.github.io/publication/jeremie-25)" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25" target="_blank" rel="noopener"&gt;JN Jérémie, E Daucé et LP (2026)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
We are blind to this change in precision. Retinotopy reveals itself during migraines or under the effect of certain drugs. There is a beautiful theoretical paper on this.
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="à-quoi-sert-la-vision--rétinotopie-fovéale-3"&gt;À quoi sert la vision : rétinotopie fovéale&lt;/h2&gt;
&lt;figure id="figure-e-rey-et-lp-2026-formes--perceptionhttpslaurentperrinetgithubio2023-01-31_formes-et-perception"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2023-01-31_formes-et-perception/images/retinotopy.png" alt="[E Rey et LP (2026) Formes &amp; perception](https://laurentperrinet.github.io/2023-01-31_formes-et-perception/)" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/2023-01-31_formes-et-perception/" target="_blank" rel="noopener"&gt;E Rey et LP (2026) Formes &amp;amp; perception&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
In the models&amp;hellip; This is a republication of an article written for the catalogue of the exhibition &amp;ldquo;Vasarely, d&amp;rsquo;un art programmatique au numérique&amp;rdquo;, which took place from 17 June to 15 October 2023 at the Espace Culturel Départemental Lympia in Nice.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="à-quoi-sert-la-vision--rétinotopie-fovéale-4"&gt;À quoi sert la vision : rétinotopie fovéale&lt;/h2&gt;
&lt;figure id="figure-jn-jérémie-e-daucé-et-lp-2026httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/graphical.png" alt="[JN Jérémie, E Daucé et LP (2026)](https://laurentperrinet.github.io/publication/jeremie-25)" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25" target="_blank" rel="noopener"&gt;JN Jérémie, E Daucé et LP (2026)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;!--
&lt;figure id="figure-jn-jérémie-e-daucé-et-lp-2026httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/featured.jpg" alt="[JN Jérémie, E Daucé et LP (2026)](https://laurentperrinet.github.io/publication/jeremie-25)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25" target="_blank" rel="noopener"&gt;JN Jérémie, E Daucé et LP (2026)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;aside class="notes"&gt;
&lt;h2 id="rétinotopie-et-intelligence-artificielle"&gt;Rétinotopie et intelligence artificielle&lt;/h2&gt;
&lt;p&gt;On peut insérer ces images dans un réseau profond que nous venons de publier. Résultats : énergie, robustesse et localisation — apport des neurosciences. Un résultat qui nous intéresse ici est que la vision dépend de notre point de vue.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;p&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/42_rotsnakes_main.jpg"
&gt;
&lt;aside class="notes"&gt;
As illustrated by this illusion.
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles-3"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;figure id="figure-rotating-snakes-akiyoshi-kitaokahttpwwwritsumeiacjpakitaokaindex-ehtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/42_rotsnakes_main.jpg" alt="[Rotating Snakes *Akiyoshi KITAOKA*](http://www.ritsumei.ac.jp/~akitaoka/index-e.html)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Rotating Snakes &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Once again, Akiyoshi Kitaoka strikes. A scientist and a true artist. I invite you to visit his website.
&lt;/aside&gt;&lt;/p&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="art--sciences-révèlent-la-vision-en-action"&gt;Art &amp;amp; Sciences révèlent la vision en action&lt;/h2&gt;
&lt;figure id="figure-etienne-rey-spectre-audiographiquehttpsondesparallelesorgprojetscloche-spectre-audiographique-diffraction"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://ondesparalleles.org/wp-content/uploads/2014/02/cloche_fiche_a.jpg" alt="[Etienne Rey, Spectre audiographique](https://ondesparalleles.org/projets/cloche-spectre-audiographique-diffraction/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://ondesparalleles.org/projets/cloche-spectre-audiographique-diffraction/" target="_blank" rel="noopener"&gt;Etienne Rey, Spectre audiographique&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;So vision is not a passive process, but an active one.&lt;/p&gt;
&lt;p&gt;Art shows us this: in this sculpture by Etienne Rey, you can move around it.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="art--sciences-révèlent-la-vision-en-action-1"&gt;Art &amp;amp; Sciences révèlent la vision en action&lt;/h2&gt;
&lt;figure id="figure-carlos-cruz-diez-2013-chromosaturationhttpsfrwikipediaorgwikicarlos_cruz-diez"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/9/92/Cruz-Diez_2013_Grand_Palais_Paris_France.jpg" alt="[Carlos Cruz-Diez (2013) Chromosaturation](https://fr.wikipedia.org/wiki/Carlos_Cruz-Diez)" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Carlos_Cruz-Diez" target="_blank" rel="noopener"&gt;Carlos Cruz-Diez (2013) Chromosaturation&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
It is a recurring theme in kinetic art. Here, Carlos Cruz-Diez, who immerses us&amp;hellip;
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-video="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/Varini.mp4"
data-background-video-loop="true"
data-background-video-muted="true"
&gt;
&lt;!-- ## La vision en action
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/Varini.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;h2 id="hahahugoshortcode415s90hbhb"&gt;&lt;aside class="notes"&gt;
Felice Varini.
&lt;/aside&gt;&lt;/h2&gt;
&lt;h2 id="art--sciences-révèlent-la-vision-en-action--tropique"&gt;Art &amp;amp; Sciences révèlent la vision en action : Tropique&lt;/h2&gt;
&lt;figure id="figure-etienne-rey-tropiquehttpsondesparallelesorgprojetstropique-7"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://ondesparalleles.org/wp-content/uploads/2014/02/tropique_fiche_b.jpg" alt="[Etienne Rey, Tropique](https://ondesparalleles.org/projets/tropique-7/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://ondesparalleles.org/projets/tropique-7/" target="_blank" rel="noopener"&gt;Etienne Rey, Tropique&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
We carried out this experiment in our first collaboration, for Marseille, European Capital of Culture 2013. A real epic: a room filled with microscopic droplets held in suspension. Six video projectors, twelve Kinects, six Raspberry Pis, some Arduinos. And sound.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="art--sciences-révèlent-la-vision-en-action--tropique-1"&gt;Art &amp;amp; Sciences révèlent la vision en action : Tropique&lt;/h2&gt;
&lt;iframe src="https://player.vimeo.com/video/66161665" width="640" height="360" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;
&lt;aside class="notes"&gt;
Sorry about the quality. The materiality of the blades of light.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="art--sciences-révèlent-la-vision-en-action--tropique-2"&gt;Art &amp;amp; Sciences révèlent la vision en action : Tropique&lt;/h2&gt;
&lt;iframe src="https://player.vimeo.com/video/56198653" width="640" height="360" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;
&lt;aside class="notes"&gt;
Why the Kinects? Interaction. From exteroceptive to introspective. A truly hallucinatory experience.
&lt;/aside&gt;
&lt;hr&gt;
&lt;figure id="figure-etienne-rey-trame-élasticitéhttpsondesparallelesorgprojetstrame-elasticite-vasarely"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/post/2016-06-02_elasticite/TRAME_Elasticit%c3%a9.jpg" alt="[Etienne Rey, TRAME ÉLASTICITÉ](https://ondesparalleles.org/projets/trame-elasticite-vasarely/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://ondesparalleles.org/projets/trame-elasticite-vasarely/" target="_blank" rel="noopener"&gt;Etienne Rey, TRAME ÉLASTICITÉ&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Another major collaboration:&lt;/p&gt;
&lt;p&gt;DIMENSIONS: 3 m high, 5 m wide.
MIRROR-POLISHED STAINLESS STEEL / ALUMINIUM / STEEL / MOTORS / REAL-TIME PROGRAM.
At the Fondation Vasarely in Aix-en-Provence, Etienne Rey chose to install a hypnotic visual ballet in the hall of the architectonic Integrations.
Composed of a succession of vertical, rotating mirror blades, the installation Trame plays with reflections and with the multiplication of space, offering the viewer a multiplicity of viewpoints in which to lose themselves at leisure. Through an effect of &amp;lsquo;porosity&amp;rsquo; sought by the artist, the installation enters into an intense dialogue with the Integrations.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="art--sciences-révèlent-la-vision-en-action--trame-élasticité"&gt;Art &amp;amp; Sciences révèlent la vision en action : TRAME ÉLASTICITÉ&lt;/h2&gt;
&lt;iframe src="https://player.vimeo.com/video/198189587" width="640" height="360" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;
&lt;aside class="notes"&gt;
An Instagram trap. From coherence to incoherence. A new kind of material.
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-video="https://laurentperrinet.github.io/sciblog/files/2025-04-24-orienting-yourself-in-the-visual-flow.mp4"
data-background-video-loop="true"
data-background-video-muted="true"
&gt;
&lt;!--
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2025-04-24-orienting-yourself-in-the-visual-flow.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;aside class="notes"&gt;
To now illustrate how this exploration can be of interest for neuroscience&amp;hellip; we can create visual stimulations that simulate&amp;hellip;
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-video="https://laurentperrinet.github.io/sciblog/files/2025-04-24-orienting-yourself-in-the-visual-flow-perturb.mp4"
data-background-video-loop="true"
data-background-video-muted="true"
&gt;
&lt;!-- ---
## La vision en action
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2025-04-24-orienting-yourself-in-the-visual-flow-perturb.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;aside class="notes"&gt;
But we can also create perturbations that force a postural adaptation or eye movements.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="art--sciences-révèlent-la-vision-en-action-2"&gt;Art &amp;amp; Sciences révèlent la vision en action&lt;/h2&gt;
&lt;p&gt;
&lt;figure id="figure-ede-rancz-role-of-neuromodulators-in-active-perceptionhttpslaurentperrinetgithubioauthorede-rancz"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/author/ede-rancz/rancz_lite.png" alt="[Ede Rancz, Role of neuromodulators in active perception](https://laurentperrinet.github.io/author/ede-rancz/)" loading="lazy" data-zoomable height="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/author/ede-rancz/" target="_blank" rel="noopener"&gt;Ede Rancz, Role of neuromodulators in active perception&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
In a virtual environment, with visual or motor perturbations, one needs to control the balance between vision and proprioception.
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="art--sciences-révèlent-la-vision-en-action-3"&gt;Art &amp;amp; Sciences révèlent la vision en action&lt;/h2&gt;
&lt;p&gt;
&lt;figure id="figure-ede-rancz-role-of-neuromodulators-in-active-perceptionhttpslaurentperrinetgithubioauthorede-rancz"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/author/ede-rancz/rancz_free.png" alt="[Ede Rancz, Role of neuromodulators in active perception](https://laurentperrinet.github.io/author/ede-rancz/)" loading="lazy" data-zoomable height="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/author/ede-rancz/" target="_blank" rel="noopener"&gt;Ede Rancz, Role of neuromodulators in active perception&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The role of neuromodulators, with links to schizophrenia. We obtained the Arthur-Bertin grant.
&lt;/aside&gt;&lt;/p&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="l-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-04-11-intelligence-du-regard/?transition=fade" target="_blank" rel="noopener"&gt;L&amp;rsquo;intelligence du regard&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-04-11-intelligence-du-regard/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="forum-des-sciences-cognitives-2026-1"&gt;&lt;u&gt;&lt;a href="https://cognivence.scicog.fr/forum-des-sciences-cognitives/" target="_blank" rel="noopener"&gt;Forum des Sciences Cognitives 2026&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2026-04-11-1"&gt;[2026-04-11]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;a href="https://laurentperrinet.github.io/project/art-science/" target="_blank" rel="noopener"&gt;Art-Sciences&lt;/a&gt; /
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/qrcode.png" alt="QR code" height="80" width="80"&gt; --&gt;
&lt;aside class="notes"&gt;
&lt;h2 id="pour-résumer"&gt;Pour résumer&lt;/h2&gt;
&lt;p&gt;La vision est magique.&lt;/p&gt;
&lt;p&gt;L&amp;rsquo;art peut en révéler la diversité.&lt;/p&gt;
&lt;p&gt;L&amp;rsquo;intelligence du regard est dans son incarnation — cognition incarnée, Varela.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="diapositives-supplémentaires"&gt;Diapositives supplémentaires&lt;/h1&gt;
&lt;hr&gt;
&lt;h2 id="victor-vasarely"&gt;Victor Vasarely&lt;/h2&gt;
&lt;figure id="figure-victor-vasarely-1962-mönchengladbach-sphère---trameshttpsfrwikipediaorgwikivictor_vasarely"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnqT-ltEk7fE-iUfHgea6HPeusGiz357ctHroJoxUxy02oXJ4U8EGbWoXPz0aEaTOtKQKNBCJ9IMsXMKBpS9ngmwWsAESV8Rrto9iM3mCBaYmRj6MiQqpyGy-uzomgMHtdXxE6QNwBqr8/s1600/fds.jpg" alt="[Victor Vasarely (1962) Mönchengladbach, Sphère - trames](https://fr.wikipedia.org/wiki/Victor_Vasarely)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Victor_Vasarely" target="_blank" rel="noopener"&gt;Victor Vasarely (1962) Mönchengladbach, Sphère - trames&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="victor-vasarely-1"&gt;Victor Vasarely&lt;/h2&gt;
&lt;figure id="figure-victor-vasarely-1977outdoor-vasarely-artwork-at-the-church-of-pálos-in-pécshttpsfrwikipediaorgwikivictor_vasarely"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/1/14/Hungary_pecs_-_vasarely0.jpg" alt="[Victor Vasarely (1977)Outdoor Vasarely artwork at the church of Pálos in Pécs.](https://fr.wikipedia.org/wiki/Victor_Vasarely)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Victor_Vasarely" target="_blank" rel="noopener"&gt;Victor Vasarely (1977)Outdoor Vasarely artwork at the church of Pálos in Pécs.&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="victor-vasarely-2"&gt;Victor Vasarely&lt;/h2&gt;
&lt;figure id="figure-victor-vasarely-1962-supernovaehttpsfrwikipediaorgwikivictor_vasarely"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUqb-w6zGJ8ul1sTnh0gXi2PWwDC4uNM0Ctj_XNerPS-BuJR6_ZGNsNWO8fv5fl3S5is8faHPrgSsD1f7_KR8JDxbaYlDFJQ9ZMmRQ5S1LzBxgq-qA3vDb-_spbICOqtVNExc2bHdIiNM/s320/Supernovae.jpg" alt="[Victor Vasarely (1962) Supernovae](https://fr.wikipedia.org/wiki/Victor_Vasarely)" loading="lazy" data-zoomable width="60%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Victor_Vasarely" target="_blank" rel="noopener"&gt;Victor Vasarely (1962) Supernovae&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="victor-vasarely-3"&gt;Victor Vasarely&lt;/h2&gt;
&lt;figure id="figure-victor-vasarely-1976-fondation-vasarely-aix-en-provencehttpsfrwikipediaorgwikifondation_vasarely"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_LmwCk716jFOMR8cAwmX96DUlrCFEGfgwJVp4SaDvk9AmlGiXA25N9-DViXxGW9zHE2AHBLxo1dVuHUg9TvRn2yEsmt-i_vvNX_h9rBqnnOFVqFUCbnXIPWVWtkv_tqKGcHRMzX4wUJQ/s1600/800px-FondationAix.JPG" alt="[Victor Vasarely (1976) Fondation Vasarely, Aix-en-Provence](https://fr.wikipedia.org/wiki/Fondation_Vasarely)" loading="lazy" data-zoomable width="60%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Fondation_Vasarely" target="_blank" rel="noopener"&gt;Victor Vasarely (1976) Fondation Vasarely, Aix-en-Provence&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;figure id="figure-etienne-rey-cristal-n2httpsondesparallelesorgprojetscristal-n2__trashed"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://ondesparalleles.org/wp-content/uploads/2014/04/etienne_rey_horizons_variables_news2.jpg" alt="[Etienne Rey, Cristal n2](https://ondesparalleles.org/projets/cristal-n2__trashed/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://ondesparalleles.org/projets/cristal-n2__trashed/" target="_blank" rel="noopener"&gt;Etienne Rey, Cristal n2&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;!--
## La vibration des apparences --&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-video="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/video1.mp4"
data-background-video-loop="true"
data-background-video-muted="true"
&gt;
&lt;!--
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/video1.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2026-03-05-ue-natural-cognition</title><link>https://laurentperrinet.github.io/slides/2026-03-05-ue-natural-cognition/</link><pubDate>Thu, 05 Mar 2026 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2026-03-05-ue-natural-cognition/</guid><description>&lt;section&gt;
&lt;h1 id="artificial-neural-networks-and-machine-learning-applied-to-the-understanding-of-biological-vision"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-03-05-ue-natural-cognition/?transition=fade" target="_blank" rel="noopener"&gt;Artificial neural networks and machine learning applied to the understanding of biological vision&lt;/a&gt;&lt;/h1&gt;
&lt;h3 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h3&gt;
&lt;h3 id="-master-1-neuroscience-ue-natural-cognition-artificial-cognition"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-03-05-ue-natural-cognition/" target="_blank" rel="noopener"&gt;[2026-03-05]&lt;/a&gt; &lt;a href="https://sciences.univ-amu.fr/fr/formation/masters/master-neurosciences" target="_blank" rel="noopener"&gt;Master 1 Neuroscience, UE Natural Cognition, Artificial Cognition&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;outline =&lt;/li&gt;
&lt;li&gt;fact: paradoxically, vision is a complex process underlying a seemingly simple function&lt;/li&gt;
&lt;li&gt;objective= understand biological vision&lt;/li&gt;
&lt;li&gt;interaction between artificial and natural NNs&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://3minutosdearte.com/wp-content/uploads/2016/11/Mir%C3%B3-Paisaje-catal%C3%A1n-el-cazador-1923-24-e1534625628322.jpg"
&gt;
&lt;!-- &lt;img src="https://3minutosdearte.com/wp-content/uploads/2016/11/Mir%C3%B3-Paisaje-catal%C3%A1n-el-cazador-1923-24-e1534625628322.jpg" width="80%"/&gt; --&gt;
&lt;aside class="notes"&gt;
Joan Miró, Paysage catalan (Le Chasseur) / Catalan Landscape (The Hunter), 1923-24
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="principles-of-vision"&gt;Principles of Vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;cut in different levels: Marr (+ Poggio)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;arbitrary, but useful division of labor= computational / algorithm / hardware&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dynamics (computational)&lt;/li&gt;
&lt;li&gt;CNNs (hardware)&lt;/li&gt;
&lt;li&gt;spiking (algorithm)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;First: What is the function of vision?&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision"&gt;What is the function of vision?&lt;/h2&gt;
&lt;p&gt;
&lt;video autoplay loop &gt;
&lt;source src="http://laurentperrinet.github.io/talk/2025-12-12-main/50_fixation_sequence.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
More generally,
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-1"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-ilya-repin-1884httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_001.jpg" alt="[An Unexpected Visitor (Ilya Repin, 1884)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Ilya Repin, 1884)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;seeing= interacting with the visual world&lt;/li&gt;
&lt;li&gt;social animals: looking at emotions&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-2"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_002.jpg" alt="[An Unexpected Visitor (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;active: the eye is always moving&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fr.wikipedia.org/wiki/Alfred_Iarbous" target="_blank" rel="noopener"&gt;https://fr.wikipedia.org/wiki/Alfred_Iarbous&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;consistency of eye traces&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-3"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---age-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_003.jpg" alt="[An Unexpected Visitor - *Age?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;Age?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;social animals: looking at emotions&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-4"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---how-long-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_006.jpg" alt="[An Unexpected Visitor - *How long?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;How long?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;active: depends on task&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-rotating-snakes-akiyoshi-kitaokahttpwwwritsumeiacjpakitaokaindex-ehtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/42_rotsnakes_main.jpg" alt="[Rotating Snakes *Akiyoshi KITAOKA*](http://www.ritsumei.ac.jp/~akitaoka/index-e.html)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Rotating Snakes &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Visual illusions are a great way to understand the constraints of vision&lt;/li&gt;
&lt;li&gt;notice that here the illusion depends on your eye movements&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Kitaoka.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Ilusions of brightness or lightness &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a simpler one showing the effect of context&lt;/li&gt;
&lt;li&gt;here, the ever-changing lighting conditions, from moonlight (about 1 candela) to sunlight (about 100,000 candela)&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion_without.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;working backwards from an illusion to its cause can be intriguing&lt;/li&gt;
&lt;li&gt;Hering: the two lines are actually parallel&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-3"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;appear bent&lt;/li&gt;
&lt;li&gt;effect of context -&amp;gt; 3D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-1976-viking-orbiter-imagehttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Face-on-mars.jpg" alt="[Cydonia Mensae (1976) *Viking Orbiter image*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (1976) &lt;em&gt;Viking Orbiter image&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;more generally, this reveals that vision generates a model of the world&lt;/li&gt;
&lt;li&gt;pareidolia: seeing faces in clouds, or a face on Mars&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_low.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;30 years later&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_high.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip; it&amp;rsquo;s just a rock&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="principles-of-vision-1"&gt;Principles of vision?&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;we know more about the function&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="computational-neuroscience-of-vision"&gt;Computational neuroscience of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;let&amp;rsquo;s delve into a computational theory of vision&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="computational-neuroscience-of-vision-1"&gt;Computational neuroscience of vision&lt;/h2&gt;
&lt;figure id="figure-sejnowski-koch--churchland-1998httpwwwhmsharvardedubssneurobornlabnb204paperssejnowski-koch-churchland-science1988pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Churchland92.png" alt="[[Sejnowski, Koch &amp; Churchland (1998)](http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf)]" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf" target="_blank" rel="noopener"&gt;Sejnowski, Koch &amp;amp; Churchland (1998)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;it&amp;rsquo;s a multi-scale, complex model&amp;hellip;&lt;/li&gt;
&lt;li&gt;perhaps we will never be able to comprehend it in full&lt;/li&gt;
&lt;li&gt;words are not precise enough, let&amp;rsquo;s use mathematics and models to describe this system&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="anatomy-of-the-human-visual-system"&gt;Anatomy of the Human Visual system&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.readkong.com/static/06/b0/06b09f0235ae7fcf29438ce317c10e60/optogenetic-visual-cortical-prosthesis-9612386-7.jpg" alt="" loading="lazy" data-zoomable width="61%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;let&amp;rsquo;s start with the anatomy&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="human-visual-system--the-hmax-model"&gt;Human Visual system : the HMAX model&lt;/h2&gt;
&lt;!--
&lt;figure id="figure-serre-and-poggio-2007"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.researchgate.net/profile/Thomas-Serre/publication/253467382/figure/fig1/AS:298143448092675@1448094345807/a-Organization-of-the-visual-cortex-The-diagram-is-modified-from-Gross-1998-Key.png" alt="[Serre and Poggio, 2007]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Serre and Poggio, 2007]
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;figure id="figure-serre-and-poggio-2007"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2020-04-ue-neurosciences-computationnelles/a-Organization-of-the-visual-cortex-The-diagram-is-modified-from-Gross-1998-Key.png" alt="[Serre and Poggio, 2007]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Serre and Poggio, 2007]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;and a model of it&amp;hellip;(&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;CNN, the mother of all deep learning models&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="primary-visual-cortex"&gt;Primary visual cortex&lt;/h2&gt;
&lt;figure id="figure-hubel--wiesel-1962"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/scientists.jpg" alt="[Hubel &amp; Wiesel, 1962]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Hubel &amp;amp; Wiesel, 1962]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;let&amp;rsquo;s zoom in, the basic ingredient is the receptive field&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="primary-visual-cortex-1"&gt;Primary visual cortex&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://raw.githubusercontent.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/master/figures/ComplexDirSelCortCell250_title.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;[Hubel &amp;amp; Wiesel, 1962]&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a single neuron is selective to some visual features&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="hybrid-ia-models"&gt;Hybrid IA models&lt;/h2&gt;
&lt;figure id="figure-using-goal-driven-deep-learning-models-to-understand-sensory-cortex-yamins--dicarlo-2016"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://knu-brainai.github.io/images/cnn.png" alt="Using goal-driven deep learning models to understand sensory cortex [Yamins &amp; DiCarlo, 2016] " loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Using goal-driven deep learning models to understand sensory cortex [Yamins &amp;amp; DiCarlo, 2016]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a single neuron is selective to some visual features&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="principles-of-vision-2"&gt;Principles of vision?&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;we know more about the function&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="convolutional-neural-nets-cnn"&gt;Convolutional Neural Nets (CNN)&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;cut in different levels: Marr (+ Poggio)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;arbitrary, but useful division of labor= computational / algorithm / hardware&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dynamics (computational)&lt;/li&gt;
&lt;li&gt;CNNs (hardware)&lt;/li&gt;
&lt;li&gt;spiking (algorithm)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;First: What is the function of vision?&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-nets-cnn-1"&gt;Convolutional Neural Nets (CNN)&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;this can be integrated in a hierarchy&amp;hellip;&lt;/li&gt;
&lt;li&gt;defining a Convolutional Neural Network (CNN)&lt;/li&gt;
&lt;li&gt;one layer is a convolution&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-nets-cnn-2"&gt;Convolutional Neural Nets (CNN)&lt;/h2&gt;
&lt;figure id="figure-jérémie--lp-2023httpslaurentperrinetgithubiopublicationjeremie-23-ultra-fast-cat"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.mdpi.com/vision/vision-07-00029/article_deploy/html/images/vision-07-00029-g003.png" alt="[[Jérémie &amp; LP, 2023](https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/" target="_blank" rel="noopener"&gt;Jérémie &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;sota&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;One-dimensional &lt;a href="https://en.wikipedia.org/wiki/Convolution#Discrete_convolution" target="_blank" rel="noopener"&gt;discrete convolution&lt;/a&gt; (e.g. in time) with a kernel $g$ of radius $K$ (see the sketch below):
$$
(f \ast g)[n]=\sum_{m=-K}^{K} f[n-m] \cdot g[m]
$$&lt;/li&gt;
&lt;/ul&gt;
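&lt;p&gt;A minimal NumPy sketch of this formula (the helper &lt;code&gt;conv1d&lt;/code&gt; and the toy spike train are illustrative choices):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def conv1d(f, g, K):
    # (f * g)[n] = sum over m in [-K, K] of f[n - m] * g[m], with zero padding at the borders
    N = len(f)
    out = np.zeros(N)
    for n in range(N):
        for m in range(-K, K + 1):
            if 0 &lt;= n - m &lt; N:
                out[n] += f[n - m] * g[m + K]  # g is stored as g[-K], ..., g[K]
    return out

# smoothing a toy spike train with a triangular kernel of radius K = 1
f = np.zeros(20)
f[[5, 12]] = 1.0
g = np.array([0.25, 0.5, 0.25])
print(conv1d(f, g, K=1))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In practice one would rather call &lt;code&gt;np.convolve(f, g, mode='same')&lt;/code&gt;, which computes the same sum.&lt;/p&gt;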
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;and be formalized as a convolution&amp;hellip;&lt;/li&gt;
&lt;li&gt;but what is a convolution?&lt;/li&gt;
&lt;li&gt;let&amp;rsquo;s start in 1D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-1"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Convolution of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast g)[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x-i, y-j] \cdot g[i, j]
$$&lt;/p&gt;
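&lt;p&gt;A minimal sketch of the two-dimensional case, here using &lt;code&gt;scipy.signal.convolve2d&lt;/code&gt; on a toy image (the kernel values are arbitrary):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np
from scipy.signal import convolve2d

f = np.random.rand(8, 8)                  # a toy image
g = np.array([[0., 1., 0.],
              [1., 4., 1.],
              [0., 1., 0.]]) / 8.          # a smoothing kernel of radius K = 1

# 2D convolution as in the formula above: the kernel is flipped along both axes
out = convolve2d(f, g, mode='same', boundary='fill', fillvalue=0)
print(out.shape)                           # (8, 8)
&lt;/code&gt;&lt;/pre&gt;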
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;now in 2D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-2"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cross-correlation&lt;/strong&gt; of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x+i, y+j] \cdot g[i, j]
$$&lt;/p&gt;
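&lt;p&gt;A small numerical check of this difference: cross-correlation slides the kernel without flipping it, so it equals a convolution with the kernel flipped along both axes (cross-correlation is also what most deep-learning &amp;ldquo;convolution&amp;rdquo; layers actually compute):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np
from scipy.signal import convolve2d, correlate2d

f = np.random.rand(8, 8)
g = np.random.rand(3, 3)

xcorr = correlate2d(f, g, mode='same')                     # cross-correlation
conv_flipped = convolve2d(f, g[::-1, ::-1], mode='same')   # convolution with flipped kernel
print(np.allclose(xcorr, conv_flipped))                    # True
&lt;/code&gt;&lt;/pre&gt;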
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;note the difference between convolutions and cross-correlation&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-3"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;figure id="figure-amidi--amidihttpsstanfordedushervineteachingcs-230cheatsheet-convolutional-neural-networks"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://stanford.edu/~shervine/teaching/cs-230/illustrations/convolution-layer-a.png" alt="[[Amidi &amp; Amidi](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks" target="_blank" rel="noopener"&gt;Amidi &amp;amp; Amidi&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;it is a translation-equivariant feature detector: the same feature is detected wherever it appears&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-4"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of an image defined on several channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{c=1}^{C} \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[c, x+i, y+j] \cdot g[c, i, j]
$$&lt;/p&gt;
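&lt;p&gt;A minimal sketch of this sum at a single position of a toy multi-channel image (shapes and names are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

C, K = 3, 1                                     # number of channels, kernel radius
f = np.random.rand(C, 8, 8)                     # multi-channel image, e.g. RGB
g = np.random.rand(C, 2 * K + 1, 2 * K + 1)     # one kernel per channel

# correlation at a single position (x, y): sum over channels c and offsets i, j
x, y = 4, 4
patch = f[:, x - K:x + K + 1, y - K:y + K + 1]  # shape (C, 2K+1, 2K+1)
print(np.einsum('cij,cij-&gt;', patch, g))         # a single scalar response
&lt;/code&gt;&lt;/pre&gt;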
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;we can add different channels to the image (eg colors)&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-5"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of a multi-channel image for multiple output channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[k, x, y] = \sum_{c,i,j} f[c, x+i, y+j] \cdot g[k, c, i, j]
$$&lt;/p&gt;
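&lt;p&gt;This matches the indexing convention of &lt;code&gt;torch.nn.Conv2d&lt;/code&gt;: the weight tensor is stored as $[k, c, i, j]$, i.e. [output channels, input channels, kernel height, kernel width]. A minimal sketch:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch

# map C = 3 input channels to 16 output channels with 3x3 kernels (radius K = 1)
conv = torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
print(conv.weight.shape)        # torch.Size([16, 3, 3, 3]) = [k, c, i, j]

x = torch.rand(1, 3, 32, 32)    # a batch holding one 3-channel image
print(conv(x).shape)            # torch.Size([1, 16, 32, 32])
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that, despite its name, &lt;code&gt;Conv2d&lt;/code&gt; implements the cross-correlation above rather than a flipped-kernel convolution.&lt;/p&gt;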
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;now we get to the full CNN&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-the-hmax-model"&gt;CNN: the HMAX model&lt;/h2&gt;
&lt;figure id="figure-serre-and-poggio-2006httpsbiologystackexchangecomquestions10955ventral-stream-pathway-and-architecture-proposed-by-poggios-group"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.stack.imgur.com/ZlFnp.png" alt="[[Serre and Poggio, 2006]](https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group)" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;[Serre and Poggio, 2006]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;sota&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-challenges"&gt;CNN: challenges&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;novel challenges for CNNs&lt;/li&gt;
&lt;li&gt;1/ backpropagation is not biologically plausible&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="principles-of-vision-3"&gt;Principles of Vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;cut in different levels: Marr (+ Poggio)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;arbitrary, but useful division of labor= computational / algorithm / hardware&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dynamics (computational)&lt;/li&gt;
&lt;li&gt;CNNs (hardware)&lt;/li&gt;
&lt;li&gt;spiking (algorithm)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;First: What is the function of vision?&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-predictive-processing"&gt;CNN: Predictive processing&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;modifications= adding sparse coding + feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-predictive-processing-1"&gt;CNN: Predictive processing&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= interpretable features&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-topography"&gt;CNN: Topography&lt;/h2&gt;
&lt;figure id="figure-bosking-et-al-1997"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Bosking97Fig4.jpg" alt="[Bosking *et al*, 1997]" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Bosking &lt;em&gt;et al&lt;/em&gt;, 1997]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;topography?&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-topography-1"&gt;CNN: Topography&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2022httpslaurentperrinetgithubiopublicationfranciosini-21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/franciosini-21/featured.jpg" alt="[[Boutin *et al*, 2022](https://laurentperrinet.github.io/publication/franciosini-21/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/franciosini-21/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2022&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= biomimicry&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;!--
---
# Computational neuroscience of vision
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;neuroAI&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
&lt;section&gt;
# Dynamics of vision
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;another important missing feature: time&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Dynamics of vision
&lt;figure id="figure-visual-latencies-grimaldi-et-al-2022httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency_bg.jpg" alt="Visual latencies [[Grimaldi *et al*, 2022]](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)" loading="lazy" data-zoomable width="55%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In particular in our group, we are interested in dynamics of neural processing&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The visual system is very efficient in generating a decision from the retinal image to the different stages of the visual pathways, here for a macaque monkey, a reaction of finger muscles in about 300 milliseconds.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the process of categorizing an object takes 10 layers&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Dynamics of vision
&lt;figure id="figure-visual-latencies-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency.jpg" alt="Visual latencies ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;the latencies are similar in the human brain, merely scaled with brain size&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;as a consequence, this efficiency is thought to be achieved by spikes, that is, brief all-or-none events which are passed across the very large network that forms the brain, from one assembly of neurons to another.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Dynamics of vision
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Dynamics of vision
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/figure-tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston, 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston, 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Dynamics of vision
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/perrinet-19-temps/flash_lag.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Dynamics of vision
&lt;figure id="figure-diagonal-markov-model-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_DiagonalMarkov.jpg" alt="Diagonal markov model ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Diagonal markov model (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Dynamics of vision
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/PBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/MBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/positional-delay.mp4" type="video/mp4"&gt;
&lt;/video&gt;
Flash-lag effect: MBP ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
# Dynamics of vision
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
---
&lt;section&gt;
# Spiking Neural Networks (SNN)
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## SNN: Leaky Integrate-and-Fire Neuron
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## SNN in neurobiology
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.sstatic.net/ixnrz.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## SNN in neurobiology
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## SNN in neurobiology
&lt;figure id="figure-diesmann-et-al-1999httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_3_diesmann_et_al_1999py"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Diesmann_et_al_1999.png" alt="[[Diesmann et al. 1999](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py" target="_blank" rel="noopener"&gt;Diesmann et al. 1999&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents. We also review&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## SNN in neurobiology
&lt;figure id="figure-haimerl-et-al-2019httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" alt="[[Haimerl et al, 2019](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Haimerl et al, 2019&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Izhikevich polychronization&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;yet the domain is vast, and there s lot to do in SNNs&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## SNN: Spiking motifs
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents. We also review&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## SNN: Spiking motifs
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## SNN: Spiking motifs
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/HSD.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HSD neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## SNN in neuromorphic engineering
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;event-based cameras&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## SNN in neuromorphic engineering
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/HDSNN_conv.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## SNN in neuromorphic engineering
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HSD neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/p&gt;
&lt;/aside&gt;
---
## SNN in neuromorphic engineering
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;nice kernels&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## SNN in neuromorphic engineering
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/accuracy.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;frugal computing&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
# Spiking Neural Networks (SNN)
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
--&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="artificial-neural-networks-applied-to-the-understanding-of-biological-vision"&gt;Artificial neural networks applied to the understanding of biological vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Only the speaker can read these notes&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;S&lt;/code&gt; key to view&lt;/li&gt;
&lt;li&gt;more on &lt;a href="https://raw.githubusercontent.com/wowchemy/starter-hugo-academic/master/exampleSite/content/slides/example/index.md" target="_blank" rel="noopener"&gt;doc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="artificial-neural-networks-and-machine-learning-applied-to-the-understanding-of-biological-vision-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-03-05-ue-natural-cognition/?transition=fade" target="_blank" rel="noopener"&gt;Artificial neural networks and machine learning applied to the understanding of biological vision&lt;/a&gt;&lt;/h1&gt;
&lt;h3 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h3&gt;
&lt;h3 id="-master-1-neuroscience-ue-natural-cognition-artificial-cognition-1"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-03-05-ue-natural-cognition/" target="_blank" rel="noopener"&gt;[2026-03-05]&lt;/a&gt; &lt;a href="https://sciences.univ-amu.fr/fr/formation/masters/master-neurosciences" target="_blank" rel="noopener"&gt;Master 1 Neuroscience, UE Natural Cognition, Artificial Cognition&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;objective= understand biological vision&lt;/li&gt;
&lt;li&gt;interaction between artificial and natural NNs&lt;/li&gt;
&lt;li&gt;outline&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2026-02-10-biomplus</title><link>https://laurentperrinet.github.io/slides/2026-02-10-biomplus/</link><pubDate>Tue, 10 Feb 2026 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2026-02-10-biomplus/</guid><description>&lt;section&gt;
&lt;h1 id="recréer-des-réseaux-neuronaux-pour-améliorer-la-compréhension-de-notre-cerveau"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-02-10-biomplus/?transition=fade" target="_blank" rel="noopener"&gt;Recréer des réseaux neuronaux pour améliorer la compréhension de notre cerveau&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-02-10-biomplus/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="webinaire-biome-"&gt;&lt;u&gt;&lt;a href="https://teams.microsoft.com/dl/launcher/launcher.html?url=%2F_%23%2Fl%2Fmeetup-join%2F19%3Ameeting_YmM1YzRjMzgtZjRkMS00Y2ZkLThjNzEtYjQxNzZjNTlmNjY5%40thread.v2%2F0%3Fcontext%3D%257b%2522Tid%2522%253a%252276cdcfb4-15ec-4c24-a75c-bf51a16064f7%2522%252c%2522Oid%2522%253a%2522c629c390-dfc8-481e-852a-c6a25629ade1%2522%257d%26anon%3Dtrue&amp;amp;type=meetup-join&amp;amp;deeplinkId=886c26ca-3923-484d-9ebf-4c5aad182080&amp;amp;directDl=true&amp;amp;msLaunch=true&amp;amp;enableMobilePage=true&amp;amp;suppressPrompt=true" target="_blank" rel="noopener"&gt;Webinaire Biome+ [Biomimétisme &amp;amp; Neurosciences]&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2026-02-03"&gt;[2026-02-03]&lt;/h3&gt;
&lt;table width="100%"&gt;
&lt;tr&gt;
&lt;th width="60%"&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" width="100%" &gt;
&lt;th width="30%"&gt;
&lt;img src="https://conect-int.github.io/slides/conect/CONECT-logo.png" width="100%" &gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;p&gt;Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;outline =&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;to summarize: sparse representations help understand biological vision in neuroscience&lt;/li&gt;
&lt;li&gt;they have practical applications in machine learning (warning: sparsity of the representation, not of the network weights)&lt;/li&gt;
&lt;li&gt;let&amp;rsquo;s sparse!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;in practice: sparse coding in a nutshell&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;perspective: convolutional sparse coding&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;url_code = &lt;a href="https://github.com/CONECT-INT/2025-03_PhDProgram-course-in-computational-neuroscience" target="_blank" rel="noopener"&gt;https://github.com/CONECT-INT/2025-03_PhDProgram-course-in-computational-neuroscience&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Not only the speaker can read these notes: press the &lt;code&gt;S&lt;/code&gt; key to view them&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;more on &lt;a href="https://raw.githubusercontent.com/wowchemy/starter-hugo-academic/master/exampleSite/content/slides/example/index.md" target="_blank" rel="noopener"&gt;doc&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;url?print-pdf http://localhost:8000/?print-pdf&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="attention-in-vision-transformers-and-in-natural-vision"&gt;Attention in Vision Transformers and in Natural Vision&lt;/h2&gt;
&lt;figure id="figure-saccade-selection-method-matthis-dallainhttpslaurentperrinetgithubioauthormatthis-dallain-with-the-edge-team--leat-laboratoryhttpsleatuniv-cotedazurfr"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/dallain-26/saccade_selection.jpg" alt="Saccade selection method. [Matthis Dallain](https://laurentperrinet.github.io/author/matthis-dallain/) with the [EDGE Team @ LEAT Laboratory](https://leat.univ-cotedazur.fr/)" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Saccade selection method. &lt;a href="https://laurentperrinet.github.io/author/matthis-dallain/" target="_blank" rel="noopener"&gt;Matthis Dallain&lt;/a&gt; with the &lt;a href="https://leat.univ-cotedazur.fr/" target="_blank" rel="noopener"&gt;EDGE Team @ LEAT Laboratory&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
One example of attention maps is shown in the figure above
&lt;/aside&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="http://laurentperrinet.github.io/talk/2025-12-12-main/50_fixation_sequence.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
More generally,
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="learning-where-to-look"&gt;Learning where to look&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2025-12-12-main/where_1.jpg" alt="" loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
More generally,
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="learning-where-to-look-1"&gt;Learning where to look&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2025-12-12-main/where_2.jpg" alt="" loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
More generally,
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="learning-where-to-look-2"&gt;Learning where to look&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2025-12-12-main/where_3.jpg" alt="" loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
More generally,
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="learning-where-to-look-3"&gt;Learning where to look&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2025-12-12-main/where_4.jpg" alt="" loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
More generally,
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="learning-where-to-look-4"&gt;Learning where to look&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2025-12-12-main/where_5.jpg" alt="" loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
More generally,
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="spiking-neural-networks"&gt;Spiking Neural Networks&lt;/h2&gt;
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption data-pre="Figure&amp;nbsp;" data-post=":&amp;nbsp;" class="numbered"&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion (a toy sketch follows below).&lt;/li&gt;
&lt;/ul&gt;
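&lt;p&gt;A hedged toy sketch of that principle, assuming a NumPy environment: this is &lt;em&gt;not&lt;/em&gt; the HD-SNN code from the paper, only an illustration of how synapse-specific delays turn a coincidence detector into a motion detector. All event times and parameters below are made up.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# Toy stimulus: a dot moving rightward at 1 pixel/ms produces one event (x, t) per pixel.
positions = np.arange(8)
events = [(x, 1.0 * x) for x in positions]

def coincidence_score(events, speed, t_ref=10.0, sigma=0.5):
    """Delay each event so that, if the hypothesised speed is correct,
    all delayed spikes arrive together at t_ref; score how tightly they align."""
    arrival = np.array([t + (t_ref - x / speed) for x, t in events])
    return float(np.exp(-np.var(arrival) / (2 * sigma**2)))

for v in [0.5, 1.0, 2.0, -1.0]:
    print(f"speed {v:+.1f} px/ms, score {coincidence_score(events, v):.3f}")
# The detector whose delays match the true speed (+1.0 px/ms) scores highest.
&lt;/code&gt;&lt;/pre&gt;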
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-1"&gt;Spiking Neural Networks&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HD-SNN neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-2"&gt;Spiking Neural Networks&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/accuracy.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="60%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;frugal computing&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="recréer-des-réseaux-neuronaux-pour-améliorer-la-compréhension-de-notre-cerveau-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-02-10-biomplus/?transition=fade" target="_blank" rel="noopener"&gt;Recréer des réseaux neuronaux pour améliorer la compréhension de notre cerveau&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-02-10-biomplus/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="webinaire-biome--1"&gt;&lt;u&gt;&lt;a href="https://teams.microsoft.com/dl/launcher/launcher.html?url=%2F_%23%2Fl%2Fmeetup-join%2F19%3Ameeting_YmM1YzRjMzgtZjRkMS00Y2ZkLThjNzEtYjQxNzZjNTlmNjY5%40thread.v2%2F0%3Fcontext%3D%257b%2522Tid%2522%253a%252276cdcfb4-15ec-4c24-a75c-bf51a16064f7%2522%252c%2522Oid%2522%253a%2522c629c390-dfc8-481e-852a-c6a25629ade1%2522%257d%26anon%3Dtrue&amp;amp;type=meetup-join&amp;amp;deeplinkId=886c26ca-3923-484d-9ebf-4c5aad182080&amp;amp;directDl=true&amp;amp;msLaunch=true&amp;amp;enableMobilePage=true&amp;amp;suppressPrompt=true" target="_blank" rel="noopener"&gt;Webinaire Biome+ [Biomimétisme &amp;amp; Neurosciences]&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2026-02-03-1"&gt;[2026-02-03]&lt;/h3&gt;
&lt;table width="100%"&gt;
&lt;tr&gt;
&lt;th width="60%"&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" width="100%" &gt;
&lt;th width="30%"&gt;
&lt;img src="https://conect-int.github.io/slides/conect/CONECT-logo.png" width="100%" &gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;/section&gt;</description></item><item><title>2026-02-03-ai-and-neuroscience-day</title><link>https://laurentperrinet.github.io/slides/2026-02-03-ai-and-neuroscience-day/</link><pubDate>Tue, 03 Feb 2026 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2026-02-03-ai-and-neuroscience-day/</guid><description>&lt;h1 id="neuroscience--ai-energy-efficient-visual-processing-algorithms"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-02-03-ai-and-neuroscience-day/?transition=fade" target="_blank" rel="noopener"&gt;Neuroscience &amp;amp; AI: Energy-efficient visual processing algorithms&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-02-03-ai-and-neuroscience-day/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="journée"&gt;&lt;u&gt;&lt;a href="https://neuro-marseille.org/en/events/workshop-on-artificial-intelligence-in-neuroscience-projects-tools-and-perspectives/" target="_blank" rel="noopener"&gt;Journée &lt;em&gt;Neurosciences et IA / IA et Neurosciences&lt;/em&gt; de NeuroMarseille&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2026-02-03"&gt;[2026-02-03]&lt;/h3&gt;
&lt;table width="100%"&gt;
&lt;tr&gt;
&lt;th width="60%"&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" width="100%" &gt;
&lt;th width="30%"&gt;
&lt;img src="https://conect-int.github.io/slides/conect/CONECT-logo.png" width="100%" &gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;p&gt;Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;outline =&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;to summarize: sparse representations help understand biological vision in neuroscience&lt;/li&gt;
&lt;li&gt;they have practical applications in machine learning (warning: sparsity of the representation, not of the network weights)&lt;/li&gt;
&lt;li&gt;let&amp;rsquo;s sparse!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;in practice: sparse coding in a nutshell&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;perspective: convolutional sparse coding&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;url_code = &lt;a href="https://github.com/CONECT-INT/2025-03_PhDProgram-course-in-computational-neuroscience" target="_blank" rel="noopener"&gt;https://github.com/CONECT-INT/2025-03_PhDProgram-course-in-computational-neuroscience&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Not only the speaker can read these notes: press the &lt;code&gt;S&lt;/code&gt; key to view them&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;more on &lt;a href="https://raw.githubusercontent.com/wowchemy/starter-hugo-academic/master/exampleSite/content/slides/example/index.md" target="_blank" rel="noopener"&gt;doc&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;url?print-pdf http://localhost:8000/?print-pdf&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks"&gt;Spiking Neural Networks&lt;/h2&gt;
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption data-pre="Figure&amp;nbsp;" data-post=":&amp;nbsp;" class="numbered"&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-1"&gt;Spiking Neural Networks&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HD-SNN neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-2"&gt;Spiking Neural Networks&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/accuracy.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="60%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;frugal computing&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="attention-in-vision-transformers-and-in-natural-vision"&gt;Attention in Vision Transformers and in Natural Vision&lt;/h2&gt;
&lt;figure id="figure-saccade-selection-method-matthis-dallainhttpslaurentperrinetgithubioauthormatthis-dallain-with-the-edge-team--leat-laboratoryhttpsleatuniv-cotedazurfr"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/dallain-26/saccade_selection.jpg" alt="Saccade selection method. [Matthis Dallain](https://laurentperrinet.github.io/author/matthis-dallain/) with the [EDGE Team @ LEAT Laboratory](https://leat.univ-cotedazur.fr/)" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Saccade selection method. &lt;a href="https://laurentperrinet.github.io/author/matthis-dallain/" target="_blank" rel="noopener"&gt;Matthis Dallain&lt;/a&gt; with the &lt;a href="https://leat.univ-cotedazur.fr/" target="_blank" rel="noopener"&gt;EDGE Team @ LEAT Laboratory&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
One example of attention maps is shown in the figure above
&lt;/aside&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="http://laurentperrinet.github.io/talk/2025-12-12-main/50_fixation_sequence.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
More generally,
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="learning-where-to-look"&gt;Learning where to look&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2025-12-12-main/where_1.jpg" alt="" loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
More generally,
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="learning-where-to-look-1"&gt;Learning where to look&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2025-12-12-main/where_2.jpg" alt="" loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
More generally,
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="learning-where-to-look-2"&gt;Learning where to look&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2025-12-12-main/where_3.jpg" alt="" loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
More generally,
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="learning-where-to-look-3"&gt;Learning where to look&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2025-12-12-main/where_4.jpg" alt="" loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
More generally,
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="learning-where-to-look-4"&gt;Learning where to look&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2025-12-12-main/where_5.jpg" alt="" loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
More generally,
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="neuroscience--ai-energy-efficient-visual-processing-algorithms-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-02-03-ai-and-neuroscience-day/?transition=fade" target="_blank" rel="noopener"&gt;Neuroscience &amp;amp; AI: Energy-efficient visual processing algorithms&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-02-03-ai-and-neuroscience-day/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="journée-1"&gt;&lt;u&gt;&lt;a href="https://neuro-marseille.org/en/events/workshop-on-artificial-intelligence-in-neuroscience-projects-tools-and-perspectives/" target="_blank" rel="noopener"&gt;Journée &lt;em&gt;Neurosciences et IA / IA et Neurosciences&lt;/em&gt; de NeuroMarseille&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2026-02-03-1"&gt;[2026-02-03]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;</description></item><item><title>2026-01-29-emergences</title><link>https://laurentperrinet.github.io/slides/2026-01-29-emergences/</link><pubDate>Thu, 29 Jan 2026 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2026-01-29-emergences/</guid><description>&lt;section&gt;
&lt;h1 id="neurosciences-and-sparsity"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-01-29-emergences/?transition=fade" target="_blank" rel="noopener"&gt;Neurosciences and sparsity&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-01-29-emergences/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="séminaire-à-l"&gt;&lt;u&gt;&lt;a href="https://www.pepr-ia.fr" target="_blank" rel="noopener"&gt;&lt;em&gt;Séminaire à l&amp;rsquo;atelier &amp;ldquo;IA embarquée&amp;rdquo; du PEPR IA&lt;/em&gt;&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2026-01-29"&gt;[2026-01-29]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/qrcode.png" alt="QR code" height="80" width="80"&gt; --&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;outline =&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;to summarize: sparse representations help understand biological vision in neuroscience&lt;/li&gt;
&lt;li&gt;they have practical applications in machine learning (warning: sparsity of the representation, not of the network weights)&lt;/li&gt;
&lt;li&gt;let&amp;rsquo;s sparse!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;in practice: sparse coding in a nutshell&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;perspective: convolutional sparse coding&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;url_code = &lt;a href="https://github.com/CONECT-INT/2025-03_PhDProgram-course-in-computational-neuroscience" target="_blank" rel="noopener"&gt;https://github.com/CONECT-INT/2025-03_PhDProgram-course-in-computational-neuroscience&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Not only the speaker can read these notes: press the &lt;code&gt;S&lt;/code&gt; key to view them&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;more on &lt;a href="https://raw.githubusercontent.com/wowchemy/starter-hugo-academic/master/exampleSite/content/slides/example/index.md" target="_blank" rel="noopener"&gt;doc&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;url?print-pdf http://localhost:8000/?print-pdf&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="sparse-representations-in-computer-vision"&gt;Sparse representations in computer vision&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2015-05-22-a-hitchhiker-guide-to-matching-pursuit/MPtutorial_rec.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;Code @ &lt;a href="https://laurentperrinet.github.io/sciblog/posts/2015-05-22-a-hitchhiker-guide-to-matching-pursuit.html" target="_blank" rel="noopener"&gt;A hitchhiker guide to Matching Pursuit&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;the whole is the sum of a few parts&lt;/p&gt;
&lt;p&gt;Sparse coding is a technique used in signal processing and machine learning to represent data in a more concise and efficient manner. It aims to find a sparse representation of the data, which means representing the data with only a small number of non-zero coefficients or activations. In sparse coding, a set of basis functions or atoms is typically defined, and the goal is to find a linear combination of these atoms that best represents the input data. The coefficients of this linear combination are often constrained to be sparse, meaning that only a few of them are allowed to be non-zero.&lt;/p&gt;
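&lt;p&gt;A minimal sketch of this greedy decomposition, assuming a NumPy environment; the dictionary and the signal below are random placeholders, so this only illustrates the matching-pursuit principle linked above, not the actual code of the tutorial.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 256                      # signal dimension, number of atoms
phi = rng.standard_normal((N, K))   # placeholder dictionary
phi /= np.linalg.norm(phi, axis=0)  # unit-norm atoms
I = rng.standard_normal(N)          # placeholder signal (e.g. a flattened image patch)

a = np.zeros(K)                     # sparse coefficient vector
residual = I.copy()
for _ in range(10):                 # keep only a handful of non-zero coefficients
    c = phi.T @ residual            # correlation of the residual with every atom
    i = np.argmax(np.abs(c))        # greedily pick the best-matching atom
    a[i] += c[i]                    # update its coefficient
    residual -= c[i] * phi[:, i]    # explain away its contribution

print("non-zero coefficients:", np.count_nonzero(a))
print("fraction of energy left in residual:", float(np.sum(residual**2) / np.sum(I**2)))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Each iteration adds at most one atom, so sparsity is controlled directly by the number of iterations (or by a threshold on the residual energy).&lt;/p&gt;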
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="neurosciences-and-sparsity-a-survey"&gt;Neurosciences and sparsity: a survey&lt;/h2&gt;
&lt;!-- &lt;iframe allowfullscreen frameborder="0" height="100%" mozallowfullscreen style="min-width: 500px; min-height: 355px" src="https://app.wooclap.com/events/HLEQUP/questions/697a765837a5e7d1b8a8eefe" width="100%"&gt;&lt;/iframe&gt;
--&gt;
&lt;ul&gt;
&lt;li&gt;Go to wooclap.com&lt;/li&gt;
&lt;li&gt;Enter the code HLEQUP&lt;/li&gt;
&lt;li&gt;Or directly follow &lt;a href="https://app.wooclap.com/HLEQUP?from=instruction-slide" target="_blank" rel="noopener"&gt;https://app.wooclap.com/HLEQUP?from=instruction-slide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
Time for a wooclap
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="neurosciences-and-sparsity-a-survey-1"&gt;Neurosciences and sparsity: a survey&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2026-01-29-emergences/wooclap_1.png" alt="" loading="lazy" data-zoomable width="62%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="neurosciences-and-sparsity-a-survey-2"&gt;Neurosciences and sparsity: a survey&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2026-01-29-emergences/wooclap_2.png" alt="" loading="lazy" data-zoomable width="62%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="neurosciences-and-sparsity-a-survey-3"&gt;Neurosciences and sparsity: a survey&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2026-01-29-emergences/wooclap_3.png" alt="" loading="lazy" data-zoomable width="62%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="neurosciences-and-sparsity-a-survey-4"&gt;Neurosciences and sparsity: a survey&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2026-01-29-emergences/wooclap_4.png" alt="" loading="lazy" data-zoomable width="62%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="neurosciences-and-sparsity-a-survey-5"&gt;Neurosciences and sparsity: a survey&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2026-01-29-emergences/wooclap_5.png" alt="" loading="lazy" data-zoomable width="62%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="neurosciences-and-sparsity-1"&gt;Neurosciences and sparsity&lt;/h2&gt;
&lt;figure id="figure-lennie-2003-the-cost-of-cortical-computationhttpsneuromatchsociallaurentperrinet114427859025152015"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://media.neuromatch.social/media_attachments/files/114/427/857/683/632/363/original/a3b375df340a54aa.png" alt="[[Lennie, 2003, The Cost of Cortical Computation](https://neuromatch.social/@laurentperrinet/114427859025152015)]" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://neuromatch.social/@laurentperrinet/114427859025152015" target="_blank" rel="noopener"&gt;Lennie, 2003, The Cost of Cortical Computation&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Starting with the brain&amp;rsquo;s known energy consumption (approximately 20% of the body&amp;rsquo;s entire energy budget despite being only 2% of body weight), Lennie worked backward to determine how many action potentials this energy could reasonably support.&lt;/p&gt;
&lt;p&gt;By synthesizing these factors and dividing the available energy budget by the number of neurons and the energy cost per spike, Lennie calculated that cortical neurons can only sustain an average firing rate of approximately 0.16 Hz while remaining within the brain&amp;rsquo;s metabolic constraints.&lt;/p&gt;
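&lt;p&gt;A back-of-envelope sketch of that division, assuming a Python interpreter; every number below is an illustrative placeholder (order of magnitude only), not a figure taken from Lennie (2003), and they are chosen so the result lands in the sub-Hz range quoted above.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;# All values are rough placeholders for illustration, not Lennie's own figures.
atp_per_spike = 2.4e9        # ATP molecules consumed per spike (assumed)
joules_per_atp = 5e-20       # free energy per ATP hydrolysis (assumed)
n_cortical_neurons = 1.6e10  # order of magnitude for human cortex (assumed)
spiking_budget_watts = 0.3   # power assumed to remain for spiking after fixed costs

joules_per_spike = atp_per_spike * joules_per_atp
mean_rate_hz = spiking_budget_watts / (n_cortical_neurons * joules_per_spike)
print(f"sustainable mean firing rate: {mean_rate_hz:.2f} Hz")
&lt;/code&gt;&lt;/pre&gt;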
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="neurosciences-and-sparsity-2"&gt;Neurosciences and sparsity&lt;/h2&gt;
&lt;figure id="figure-brunel-2001httpsbooksgooglefrbookshlfrlridb8wodqwdtsscoifndpgpa307otsknhqrj-tszsig0wi2cq2rnmxc7fvtyjoewzedlcgredir_escyvonepageqffalse"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/Brunel200Fig2.png" alt="[[Brunel, 2001](https://books.google.fr/books?hl=fr&amp;lr=&amp;id=b8woDqWdTssC&amp;oi=fnd&amp;pg=PA307&amp;ots=KNHQrJ-TsZ&amp;sig=0WI2cq2RnMXC7fVTyjOEWZEdlCg&amp;redir_esc=y#v=onepage&amp;q&amp;f=false)]" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://books.google.fr/books?hl=fr&amp;amp;lr=&amp;amp;id=b8woDqWdTssC&amp;amp;oi=fnd&amp;amp;pg=PA307&amp;amp;ots=KNHQrJ-TsZ&amp;amp;sig=0WI2cq2RnMXC7fVTyjOEWZEdlCg&amp;amp;redir_esc=y#v=onepage&amp;amp;q&amp;amp;f=false" target="_blank" rel="noopener"&gt;Brunel, 2001&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Phase diagrams of sparsely connected networks of excitatory and inhibitory spiking neurons&lt;/p&gt;
&lt;p&gt;a healthy network sits around 1 Hz average firing rate, i.e. sparse activity (even sparser in auditory cortex and in insects, &amp;hellip;)&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="neurosciences-and-sparsity-3"&gt;Neurosciences and sparsity&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="neurosciences-and-sparsity-4"&gt;Neurosciences and sparsity&lt;/h2&gt;
&lt;figure id="figure-kremkow-et-al-2016httpslaurentperrinetgithubiopublicationkremkow-16"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/fncir-10-00037-g001a.jpg" alt="[[Kremkow *et al*, 2016](https://laurentperrinet.github.io/publication/kremkow-16/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/kremkow-16/" target="_blank" rel="noopener"&gt;Kremkow &lt;em&gt;et al&lt;/em&gt;, 2016&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="neurosciences-and-sparsity-5"&gt;Neurosciences and sparsity&lt;/h2&gt;
&lt;figure id="figure-kremkow-et-al-2016httpslaurentperrinetgithubiopublicationkremkow-16"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/fncir-10-00037-g001.jpg" alt="[[Kremkow *et al*, 2016](https://laurentperrinet.github.io/publication/kremkow-16/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/kremkow-16/" target="_blank" rel="noopener"&gt;Kremkow &lt;em&gt;et al&lt;/em&gt;, 2016&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
Vinje and Gallant
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="sparse-representations-in-a-nutshell"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.giphy.com/26xBtPbmDlugFxUiY.webp" alt="" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;in summary: Sparse representations resulting from these processes have been successfully applied in various domains such as image processing, computer vision, and audio signal processing. It has shown promise in tasks such as noise reduction, compression, feature extraction, and pattern recognition. By capturing the essential structure and characteristics of the data in a sparse representation, sparse coding can help reduce redundancy and noise, and extract meaningful features for further analysis or processing.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;let&amp;rsquo;s delve into a computational theory of sparse coding&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;review_bib = s.content_bib(&amp;ldquo;LP&amp;rdquo;, &amp;ldquo;2015&amp;rdquo;, &amp;lsquo;&amp;ldquo;Sparse models&amp;rdquo; in &lt;a href="https://laurentperrinet.github.io/publication/cristobal-perrinet-keil-15-bicv/"&gt;Biologically Inspired Computer Vision&lt;/a&gt;&amp;rsquo;)&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-1"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-lp-et-al-2004httpslaurentperrinetgithubiopublicationperrinet-04-tauc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-04-tauc/featured.png" alt="[[LP *et al*, 2004](https://laurentperrinet.github.io/publication/perrinet-04-tauc/)]" loading="lazy" data-zoomable height="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-04-tauc/" target="_blank" rel="noopener"&gt;LP &lt;em&gt;et al&lt;/em&gt;, 2004&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-2"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-olshausen-and-field-1997httpmplabucsdedumarniigertolshaussen_1997pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/Olshausen_2.png" alt="[[Olshausen and Field (1997)](http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf)]" loading="lazy" data-zoomable height="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf" target="_blank" rel="noopener"&gt;Olshausen and Field (1997)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-3"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;Generative model of image synthesis:&lt;/p&gt;
&lt;p&gt;$I[x, y] = $
&lt;span class="fragment " &gt;
$\sum_{i=1}^{K} a[i] \cdot \phi[i, x, y]$
&lt;/span&gt;
&lt;span class="fragment " &gt;
$ + \varepsilon[x, y]$
&lt;/span&gt;&lt;/p&gt;
&lt;span class="fragment " &gt;
Where $\phi$ is a dictionary of $K$ atoms, $a$ is a sparse vector of coefficients, and $\varepsilon$ is a noise term.
&lt;/span&gt;
&lt;p&gt;[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-15-bicv/" target="_blank" rel="noopener"&gt;LP (2015)&lt;/a&gt;]&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;generative model&lt;/p&gt;
&lt;p&gt;$\phi$ is over-complete (otherwise the problem is trivially solved by the pseudo-inverse)&lt;/p&gt;
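&lt;p&gt;A minimal sketch of this generative model, assuming NumPy; the dictionary, image size and active coefficients are arbitrary placeholders.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

rng = np.random.default_rng(1)
H, W, K = 16, 16, 32                      # image size and number of atoms
phi = rng.standard_normal((K, H, W))      # placeholder dictionary phi[i, x, y]
a = np.zeros(K)                           # sparse coefficient vector a[i]
a[rng.choice(K, size=3, replace=False)] = rng.standard_normal(3)
noise = 0.05 * rng.standard_normal((H, W))

# I[x, y] = sum_i a[i] * phi[i, x, y] + epsilon[x, y]
I = np.tensordot(a, phi, axes=1) + noise
print(I.shape, "image synthesised from", np.count_nonzero(a), "active atoms")
&lt;/code&gt;&lt;/pre&gt;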
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-4"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-olshausen-and-field-1997httpmplabucsdedumarniigertolshaussen_1997pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/Olshausen_1.png" alt="[[Olshausen and Field (1997)](http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf" target="_blank" rel="noopener"&gt;Olshausen and Field (1997)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-5"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;Given an observation $I$,&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\mathcal{L}(a) &amp;amp; = - \log Pr( a | I ) \\
\end{aligned}
$$&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-6"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;Given an observation $I$,&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\mathcal{L}(a) &amp;amp; = - \log Pr( a | I ) \\
&amp;amp; = - \log Pr( I | a ) - \log Pr(a) \\
\end{aligned}
$$&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-7"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;Given an observation $I$,&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\mathcal{L}(a) &amp;amp; = - \log Pr( a | I ) \\
&amp;amp; = - \log Pr( I | a ) - \log Pr(a) \\
&amp;amp; = \frac{1}{2\sigma_n^2} \sum_{x, y} ( I[x, y] - \sum_{i=1}^{K} a[i] \cdot \phi[i, x, y])^2 - \sum_{i=1}^{K} \log Pr( a[i] )
\end{aligned}
$$&lt;/p&gt;
&lt;aside class="notes"&gt;
Probabilistic model
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-8"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;The problem is formalized as an optimization problem $a^\ast = \arg \min_a \mathcal{L}(a)$ with:&lt;/p&gt;
&lt;p&gt;$$
\mathcal{L}(a) = \frac{1}{2} \sum_{x, y} ( I[x, y] - \sum_{i=1}^{K} a[i] \cdot \phi[i, x, y])^2 + \lambda \cdot \sum_{i=1}^{K} ( a[i] \neq 0)
$$&lt;/p&gt;
&lt;p&gt;[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-15-bicv/" target="_blank" rel="noopener"&gt;LP (2015)&lt;/a&gt;]&lt;/p&gt;
&lt;aside class="notes"&gt;
spiking prior =&amp;gt; penalising the number of active (non-zero) coefficients, i.e. the L0 pseudo-norm
the resulting L0 problem is combinatorial (NP-hard)
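&lt;p&gt;One way to make that shorthand explicit (a slide-level sketch, dropping additive constants): assume each coefficient is independently either exactly zero or active, with a fixed cost $\beta$ in log-probability for being active; then&lt;/p&gt;
&lt;p&gt;$$
- \log Pr(a) = - \sum_{i=1}^{K} \log Pr( a[i] ) = \beta \cdot \sum_{i=1}^{K} ( a[i] \neq 0 ) + \mathrm{const}
$$&lt;/p&gt;
&lt;p&gt;so that, after rescaling the data term by $\sigma_n^2$, the prior contributes the L0 penalty $\lambda \cdot \sum_{i=1}^{K} ( a[i] \neq 0)$ above, with $\lambda = \sigma_n^2 \beta$.&lt;/p&gt;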
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-9"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;The problem is formalized as an optimization problem $a^\ast = \arg \min_a \mathcal{L}(a)$ with:&lt;/p&gt;
&lt;p&gt;$$
\mathcal{L}(a) = \frac{1}{2} \sum_{x, y} ( I[x, y] - \sum_{i=1}^{K} a[i] \cdot \phi[i, x, y])^2 + \lambda \cdot \sum_{i=1}^{K} | a[i] |
$$&lt;/p&gt;
&lt;aside class="notes"&gt;
Laplacian (double-exponential) prior =&amp;gt; L1 norm
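&lt;p&gt;A minimal sketch of solving this L1-penalised problem with iterative soft-thresholding (ISTA), assuming NumPy; the dictionary, signal and penalty value are placeholders, and the step size is simply set from the spectral norm of the dictionary.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

rng = np.random.default_rng(2)
N, K, lam = 64, 128, 0.1
phi = rng.standard_normal((N, K)) / np.sqrt(N)   # placeholder dictionary (columns = atoms)
truth = np.zeros(K)
truth[rng.choice(K, size=5, replace=False)] = rng.standard_normal(5)
I = phi @ truth + 0.01 * rng.standard_normal(N)  # noisy sparse synthesis

step = 1.0 / np.linalg.norm(phi, 2) ** 2         # 1 / Lipschitz constant of the gradient
a = np.zeros(K)
for _ in range(200):
    grad = phi.T @ (phi @ a - I)                 # gradient of the quadratic data term
    z = a - step * grad                          # gradient step
    a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding (prox of L1)
print("non-zero coefficients:", np.count_nonzero(a), "of", K)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The soft-thresholding step is the proximal operator of the L1 penalty, which is why this relaxation is attractive: the objective stays convex while most coefficients are still driven exactly to zero.&lt;/p&gt;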
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-10"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-rentzeperis-et-al-2023httpslaurentperrinetgithubiopublicationrentzeperis-23"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/rentzeperis-23/featured.png" alt="[[Rentzeperis *et al* (2023)](https://laurentperrinet.github.io/publication/rentzeperis-23/)]" loading="lazy" data-zoomable height="60%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/rentzeperis-23/" target="_blank" rel="noopener"&gt;Rentzeperis &lt;em&gt;et al&lt;/em&gt; (2023)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="sparse-representations-and-learning"&gt;Sparse representations and learning&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/ssc.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-sparse-coding"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_c.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;novel challenges for CNNs&lt;/li&gt;
&lt;li&gt;1/ backpropagation is not bioplausible&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-sparse-coding-1"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;novel challenges for CNNs&lt;/li&gt;
&lt;li&gt;1/ backpropagation is not bioplausible&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/SDPC_3.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result on MNIST&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing-1"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure4a.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;modifications= adding sparse coding + feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing-2"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure4b.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;modifications= adding sparse coding + feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing-3"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= interpretable features&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing-4"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/training_video_ATT.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= interpretable features&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/HDSNN_conv.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-1"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HD-SNN neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-2"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;nice kernels&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-3"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/accuracy.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="60%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;frugal computing&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="neurosciences-and-sparsity-6"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-01-29-emergences/?transition=fade" target="_blank" rel="noopener"&gt;Neurosciences and sparsity&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-01-29-emergences/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="séminaire-à-l-1"&gt;&lt;u&gt;&lt;a href="https://www.pepr-ia.fr" target="_blank" rel="noopener"&gt;&lt;em&gt;Séminaire à l&amp;rsquo;atelier &amp;ldquo;IA embarquée&amp;rdquo; du PEPR IA&lt;/em&gt;&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2026-01-29-1"&gt;[2026-01-29]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/qrcode.png" alt="QR code" height="80" width="80"&gt; --&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;to summarize: sparse representations help understand biological vision in neuroscience&lt;/li&gt;
&lt;li&gt;they have practical applications in machine learning&lt;/li&gt;
&lt;li&gt;let&amp;rsquo;s sparse!&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>Markdown Slides Demo</title><link>https://laurentperrinet.github.io/slides/example/</link><pubDate>Mon, 15 Dec 2025 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/example/</guid><description>&lt;!-- no-branding --&gt;
&lt;h1 id="markdown-slides"&gt;Markdown Slides&lt;/h1&gt;
&lt;h3 id="write-in-markdown-present-anywhere"&gt;Write in Markdown. Present Anywhere.&lt;/h3&gt;
&lt;hr&gt;
&lt;h2 id="what-you-can-do"&gt;What You Can Do&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Write slides in &lt;strong&gt;pure Markdown&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Include &lt;strong&gt;code&lt;/strong&gt;, &lt;strong&gt;math&lt;/strong&gt;, and &lt;strong&gt;diagrams&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Add &lt;strong&gt;speaker notes&lt;/strong&gt; for presenter view&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;progressive reveals&lt;/strong&gt; for storytelling&lt;/li&gt;
&lt;li&gt;Customize &lt;strong&gt;themes&lt;/strong&gt; and &lt;strong&gt;transitions&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="code-highlighting"&gt;Code Highlighting&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;fibonacci&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;fibonacci&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;fibonacci&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Calculate the 10th Fibonacci number&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fibonacci&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="c1"&gt;# Output: 55&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;hr&gt;
&lt;h2 id="mathematical-equations"&gt;Mathematical Equations&lt;/h2&gt;
&lt;p&gt;Einstein&amp;rsquo;s famous equation:&lt;/p&gt;
&lt;p&gt;$$E = mc^2$$&lt;/p&gt;
&lt;p&gt;The quadratic formula:&lt;/p&gt;
&lt;p&gt;$$x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$$&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="mermaid-diagrams"&gt;Mermaid Diagrams&lt;/h2&gt;
&lt;div class="mermaid"&gt;graph LR
A[Markdown] --&gt; B[Hugo]
B --&gt; C[Reveal.js]
C --&gt; D[Beautiful Slides]
&lt;/div&gt;
&lt;hr&gt;
&lt;h2 id="progressive-reveals"&gt;Progressive Reveals&lt;/h2&gt;
&lt;p&gt;Build your narrative step by step:&lt;/p&gt;
&lt;span class="fragment " &gt;
First, introduce the concept
&lt;/span&gt;
&lt;span class="fragment " &gt;
Then, add supporting details
&lt;/span&gt;
&lt;span class="fragment " &gt;
Finally, deliver the conclusion
&lt;/span&gt;
&lt;hr&gt;
&lt;h2 id="speaker-notes"&gt;Speaker Notes&lt;/h2&gt;
&lt;p&gt;Press &lt;strong&gt;S&lt;/strong&gt; to open presenter view!&lt;/p&gt;
&lt;p&gt;Note:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;These notes are only visible in presenter mode&lt;/li&gt;
&lt;li&gt;Perfect for talking points and reminders&lt;/li&gt;
&lt;li&gt;Supports &lt;strong&gt;Markdown&lt;/strong&gt; formatting&lt;/li&gt;
&lt;li&gt;Add timing cues and references here&lt;/li&gt;
&lt;/ul&gt;
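&lt;p&gt;As a sketch (assuming the usual reveal.js Markdown convention, where a line starting with &lt;code&gt;Note:&lt;/code&gt; begins the speaker notes), a slide with notes could be written as:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;## Speaker Notes

Press **S** to open presenter view!

Note:
- These notes are only visible in presenter mode
- Add timing cues and references here
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;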
&lt;hr&gt;
&lt;h2 id="dual-column-layout"&gt;Dual Column Layout&lt;/h2&gt;
&lt;div class="r-hstack"&gt;
&lt;div style="flex: 1; padding-right: 1rem;"&gt;
&lt;h3 id="benefits"&gt;Benefits&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Open source&lt;/li&gt;
&lt;li&gt;Version control&lt;/li&gt;
&lt;li&gt;No vendor lock-in&lt;/li&gt;
&lt;li&gt;Works offline&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div style="flex: 1; padding-left: 1rem;"&gt;
&lt;h3 id="use-cases"&gt;Use Cases&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Tech talks&lt;/li&gt;
&lt;li&gt;Academic papers&lt;/li&gt;
&lt;li&gt;Team updates&lt;/li&gt;
&lt;li&gt;Training sessions&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-color="#1e3a8a"
&gt;
&lt;h2 id="custom-backgrounds"&gt;Custom Backgrounds&lt;/h2&gt;
&lt;p&gt;Slides can have &lt;strong&gt;custom colors&lt;/strong&gt; or images.&lt;/p&gt;
&lt;p&gt;Use &lt;code&gt;{{&amp;lt; slide background-color=&amp;quot;#hex&amp;quot; &amp;gt;}}&lt;/code&gt;&lt;/p&gt;
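&lt;p&gt;For instance (a sketch using the hex value of this very slide), the shortcode is placed at the top of the slide&amp;rsquo;s Markdown:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;---

{{&amp;lt; slide background-color=&amp;quot;#1e3a8a&amp;quot; &amp;gt;}}

## Custom Backgrounds
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;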
&lt;hr&gt;
&lt;h2 id="keyboard-shortcuts"&gt;Keyboard Shortcuts&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Key&lt;/th&gt;
&lt;th&gt;Action&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;→&lt;/code&gt; / &lt;code&gt;←&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Navigate slides&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;S&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Speaker notes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;F&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Fullscreen&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;O&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Overview mode&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ESC&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Exit modes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;h2 id="get-started"&gt;Get Started&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Create a file in &lt;code&gt;content/slides/&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Add front matter with &lt;code&gt;type: slides&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Write your content in Markdown&lt;/li&gt;
&lt;li&gt;Separate slides with &lt;code&gt;---&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
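&lt;p&gt;Putting these steps together, a minimal slide file could look like this (the file path and title are illustrative):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;---
# content/slides/my-talk/index.md
title: My Talk
type: slides
---

## First Slide

Write your content in Markdown.

---

## Second Slide

Separate slides with `---`.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;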
&lt;hr&gt;
&lt;h2 id="thank-you"&gt;Thank You!&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Questions?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/HugoBlox/kit" target="_blank" rel="noopener"&gt;HugoBlox/kit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Docs: &lt;a href="https://docs.hugoblox.com" target="_blank" rel="noopener"&gt;docs.hugoblox.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;Built with Markdown Slides&lt;/em&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="-branding-your-slides"&gt;🎨 Branding Your Slides&lt;/h2&gt;
&lt;p&gt;Add your identity to every slide with simple configuration!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What you can add:&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Position Options&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Logo&lt;/td&gt;
&lt;td&gt;top-left, top-right, bottom-left, bottom-right&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Title&lt;/td&gt;
&lt;td&gt;Same as above&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Author&lt;/td&gt;
&lt;td&gt;Same as above&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Footer Text&lt;/td&gt;
&lt;td&gt;Same + bottom-center&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Edit the &lt;code&gt;branding:&lt;/code&gt; section in your slide&amp;rsquo;s front matter (top of file).&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="-adding-your-logo"&gt;📁 Adding Your Logo&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Place your logo in &lt;code&gt;assets/media/&lt;/code&gt; folder&lt;/li&gt;
&lt;li&gt;Use SVG format for best results (auto-adapts to any theme!)&lt;/li&gt;
&lt;li&gt;Add to front matter:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;branding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;logo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;your-logo.svg&amp;#34;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Must be in assets/media/&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;top-right&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;60px&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; SVGs with &lt;code&gt;fill=&amp;quot;currentColor&amp;quot;&lt;/code&gt; automatically match theme colors!&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="-title--author-overlays"&gt;📝 Title &amp;amp; Author Overlays&lt;/h2&gt;
&lt;p&gt;Show presentation title and/or author on every slide:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;branding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;show&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;bottom-left&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;Short Title&amp;#34;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Optional: override long page title&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;author&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;show&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;bottom-right&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Author is auto-detected from page front matter (&lt;code&gt;author:&lt;/code&gt; or &lt;code&gt;authors:&lt;/code&gt;).&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="-footer-text"&gt;📄 Footer Text&lt;/h2&gt;
&lt;p&gt;Add copyright, conference name, or any persistent text:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;branding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;footer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;© 2024 Your Name · ICML 2024&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;bottom-center&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Supports Markdown! Use &lt;code&gt;[Link](url)&lt;/code&gt; for clickable links.&lt;/p&gt;
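&lt;p&gt;For example (a sketch; the link target and label are placeholders), a footer with a clickable link could be configured as:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;branding:
  footer:
    text: &amp;#34;© 2024 Your Name · [Slides](https://example.com/slides)&amp;#34;
    position: &amp;#34;bottom-center&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;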
&lt;hr&gt;
&lt;!-- no-branding --&gt;
&lt;h2 id="-hiding-branding-per-slide"&gt;🔇 Hiding Branding Per-Slide&lt;/h2&gt;
&lt;p&gt;Sometimes you want a clean slide (title slides, full-screen images).&lt;/p&gt;
&lt;p&gt;Add this comment at the &lt;strong&gt;start&lt;/strong&gt; of your slide content:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&amp;lt;!-- no-branding --&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="gu"&gt;## My Clean Slide
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="gu"&gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Content here...
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;☝️ &lt;strong&gt;This slide uses &lt;code&gt;&amp;lt;!-- no-branding --&amp;gt;&lt;/code&gt;&lt;/strong&gt; — notice no logo or overlays!&lt;/p&gt;
&lt;hr&gt;
&lt;!-- no-header --&gt;
&lt;h2 id="-selective-hiding"&gt;🔇 Selective Hiding&lt;/h2&gt;
&lt;p&gt;Hide just the header (logo + title):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&amp;lt;!-- no-header --&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Or just the footer (author + footer text):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&amp;lt;!-- no-footer --&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;☝️ &lt;strong&gt;This slide uses &lt;code&gt;&amp;lt;!-- no-header --&amp;gt;&lt;/code&gt;&lt;/strong&gt; — footer still visible below!&lt;/p&gt;
&lt;hr&gt;
&lt;!-- no-footer --&gt;
&lt;h2 id="-quick-reference"&gt;✅ Quick Reference&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Comment&lt;/th&gt;
&lt;th&gt;Hides&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;&amp;lt;!-- no-branding --&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Everything (logo, title, author, footer)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;&amp;lt;!-- no-header --&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Logo + Title overlay&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;&amp;lt;!-- no-footer --&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Author + Footer text&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;☝️ &lt;strong&gt;This slide uses &lt;code&gt;&amp;lt;!-- no-footer --&amp;gt;&lt;/code&gt;&lt;/strong&gt; — logo still visible above!&lt;/p&gt;</description></item><item><title>2025-10-16-flash-lag-effect</title><link>https://laurentperrinet.github.io/slides/2025-10-16-flash-lag-effect/</link><pubDate>Thu, 16 Oct 2025 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2025-10-16-flash-lag-effect/</guid><description>&lt;section&gt;
&lt;h1 id="mislocalization-by-design"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2025-10-16-flash-lag-effect/?transition=fade" target="_blank" rel="noopener"&gt;Mislocalization by Design&lt;br&gt; The Flash-Lag Effect as Prediction&lt;/a&gt;&lt;/h1&gt;
&lt;h3 id="laurent-perrinet-cnrsamu-marseille-france"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet, CNRS/AMU, Marseille, France&lt;/a&gt;&lt;/em&gt;&lt;/h3&gt;
&lt;table width="100%"&gt;
&lt;tr&gt;
&lt;th width="80%"&gt;
&lt;img src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/header.png" width="100%" &gt;
&lt;th width="20%"&gt;
&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/coverart.jpg" width="100%" &gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;h3 id="-suresh-krishna"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2025-10-16-flash-lag-effect/" target="_blank" rel="noopener"&gt;[2025-10-16]&lt;/a&gt; &lt;a href="https://neuromod.univ-cotedazur.eu" target="_blank" rel="noopener"&gt;Suresh Krishna&amp;rsquo;s lab meeting&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;p&gt;Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;Mislocalization by Design: The Flash-Lag Effect as Prediction&lt;/p&gt;
&lt;p&gt;Why do we sometimes misjudge where visual objects are? This talk explores how predictive processing may cause systematic perceptual mislocalizations. Indeed, the early visual system doesn&amp;rsquo;t passively process information—it actively predicts the world, compensating for neural delays by extrapolating motion trajectories. Using a Bayesian computational model, I show how this predictive mechanism explains the flash-lag effect: moving objects appear ahead of flashed ones because the brain forecasts their current position while the unpredictable flash cannot be anticipated. This framework reveals that mislocalization isn&amp;rsquo;t a bug but a feature of efficient visual coding. I&amp;rsquo;ll discuss how these principles illuminate both biological vision and artificial visual system design, demonstrating that what we perceive as &amp;ldquo;now&amp;rdquo; is actually the brain&amp;rsquo;s best prediction of the present.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="timing-in-the-visual-pathways"&gt;Timing in the visual pathways&lt;/h2&gt;
&lt;hr&gt;
&lt;figure id="figure-ultra-rapid-visual-processing-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="../../publication/grimaldi-22-polychronies/featured.jpg" alt="Ultra-rapid visual processing ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Ultra-rapid visual processing (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-compensating-visual-delays-perrinet-adams--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/tsonga.jpg" alt="Compensating visual delays ([Perrinet, Adams &amp; Friston 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Compensating visual delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet, Adams &amp;amp; Friston 2014&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-compensating-visual-delays-perrinet-adams--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/figure-tsonga.jpg" alt="Compensating visual delays ([Perrinet Adams &amp; Friston, 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Compensating visual delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet Adams &amp;amp; Friston, 2014&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/line_motion.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/phi_motion.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;figure id="figure-suppressive-travelling-waves-chemla-et-al-2019httpslaurentperrinetgithubiopublicationchemla-19"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://raw.githubusercontent.com/laurentperrinet/2019-04-18_JNLF/master/figures/Chemla_etal2019.png" alt="Suppressive travelling waves ([Chemla *et al*, 2019](https://laurentperrinet.github.io/publication/chemla-19/))." loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Suppressive travelling waves (&lt;a href="https://laurentperrinet.github.io/publication/chemla-19/" target="_blank" rel="noopener"&gt;Chemla &lt;em&gt;et al&lt;/em&gt;, 2019&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="predictive-processing"&gt;Predictive processing&lt;/h2&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/aperture_aperture.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;!--
---
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/aperture_box.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/aperture_cube.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;figure id="figure-motion-based-prediction-perrinet-et-al-2012httpslaurentperrinetgithubiopublicationperrinet-12-pred"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/navier.svg" alt="Motion-based prediction ([Perrinet *et al*, 2012](https://laurentperrinet.github.io/publication/perrinet-12-pred/))." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Motion-based prediction (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-12-pred/" target="_blank" rel="noopener"&gt;Perrinet &lt;em&gt;et al&lt;/em&gt;, 2012&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-motion-based-prediction-perrinet-et-al-2012httpslaurentperrinetgithubiopublicationperrinet-12-pred"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/perrinet12pred_figure2.png" alt="Motion-based prediction ([Perrinet *et al*, 2012](https://laurentperrinet.github.io/publication/perrinet-12-pred/))." loading="lazy" data-zoomable width="61%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Motion-based prediction (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-12-pred/" target="_blank" rel="noopener"&gt;Perrinet &lt;em&gt;et al&lt;/em&gt;, 2012&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/line_particles.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;Motion-based prediction (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-12-pred/" target="_blank" rel="noopener"&gt;Perrinet &lt;em&gt;et al&lt;/em&gt;, 2012&lt;/a&gt;).&lt;/p&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="flash-lag-effect"&gt;Flash-lag effect&lt;/h2&gt;
&lt;hr&gt;
&lt;figure id="figure-flash-lag-effect-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_cartoon.jpg" alt="Flash-lag effect ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Flash-lag effect (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/perrinet-19-temps/flash_lag.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;figure id="figure-diagonal-markov-model-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_DiagonalMarkov.jpg" alt="Diagonal Markov model ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Diagonal Markov model (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;p&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/PBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;span class="fragment " &gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/MBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;&lt;/p&gt;
&lt;/span&gt;
&lt;p&gt;Flash-lag effect: MBP (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).&lt;/p&gt;
&lt;hr&gt;
&lt;figure id="figure-flash-lag-effect-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE.jpg" alt="Flash-lag effect ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Flash-lag effect (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/positional-delay.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;!--
&lt;figure id="figure-space-time-probability-distributions-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_histogram.jpg" alt="Space-time probability distributions ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Space-time probability distributions (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
--- --&gt;
&lt;figure id="figure-space-time-probability-distributions-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_histogram_comp.jpg" alt="Space-time probability distributions ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Space-time probability distributions (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-motion-reversal-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_MotionReversal_MBP.jpg" alt="Motion reversal ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Motion reversal (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-motion-reversal-smoothed-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_MotionReversal.jpg" alt="Motion reversal (smoothed) ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Motion reversal (smoothed) (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/perrinet-19-temps/flash_lag_stop.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;figure id="figure-space-time-probability-distributions-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_histogram.jpg" alt="Space-time probability distributions ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Space-time probability distributions (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-limit-cycles-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_limit_cycles.jpg" alt="Limit cycles ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Limit cycles (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="mislocalization-by-design-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2025-10-16-flash-lag-effect/?transition=fade" target="_blank" rel="noopener"&gt;Mislocalization by Design&lt;br&gt; The Flash-Lag Effect as Prediction&lt;/a&gt;&lt;/h1&gt;
&lt;h3 id="laurent-perrinet-cnrsamu-marseille-france-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet, CNRS/AMU, Marseille, France&lt;/a&gt;&lt;/em&gt;&lt;/h3&gt;
&lt;table width="100%"&gt;
&lt;tr&gt;
&lt;th width="80%"&gt;
&lt;img src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/header.png" width="100%" &gt;
&lt;th width="20%"&gt;
&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/coverart.jpg" width="100%" &gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;h3 id="-suresh-krishna-1"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2025-10-16-flash-lag-effect/" target="_blank" rel="noopener"&gt;[2025-10-16]&lt;/a&gt; &lt;a href="https://neuromod.univ-cotedazur.eu" target="_blank" rel="noopener"&gt;Suresh Krishna&amp;rsquo;s lab meeting&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;p&gt;Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;Mislocalization by Design: The Flash-Lag Effect as Prediction&lt;/p&gt;
&lt;p&gt;Why do we sometimes misjudge where visual objects are? This talk explores how predictive processing may cause systematic perceptual mislocalizations. Indeed, the early visual system doesn&amp;rsquo;t passively process information—it actively predicts the world, compensating for neural delays by extrapolating motion trajectories. Using a Bayesian computational model, I show how this predictive mechanism explains the flash-lag effect: moving objects appear ahead of flashed ones because the brain forecasts their current position while the unpredictable flash cannot be anticipated. This framework reveals that mislocalization isn&amp;rsquo;t a bug but a feature of efficient visual coding. I&amp;rsquo;ll discuss how these principles illuminate both biological vision and artificial visual system design, demonstrating that what we perceive as &amp;ldquo;now&amp;rdquo; is actually the brain&amp;rsquo;s best prediction of the present.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2025-05-26-master-m-4-nc</title><link>https://laurentperrinet.github.io/slides/2025-05-26-master-m-4-nc/</link><pubDate>Mon, 26 May 2025 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2025-05-26-master-m-4-nc/</guid><description>&lt;section&gt;
&lt;h1 id="artificial-neural-networks-and-machine-learning-applied-to-the-understanding-of-biological-vision"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2025-05-26-master-m-4-nc/?transition=fade" target="_blank" rel="noopener"&gt;Artificial neural networks and machine learning applied to the understanding of biological vision&lt;/a&gt;&lt;/h1&gt;
&lt;h3 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h3&gt;
&lt;h3 id="-master-m4nc-de-l"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2025-05-26-master-m-4-nc/" target="_blank" rel="noopener"&gt;[2025-05-26]&lt;/a&gt; &lt;a href="https://neuromod.univ-cotedazur.eu" target="_blank" rel="noopener"&gt;Master M4NC de l&amp;rsquo;institut NeuroMod, cours Prospective Innovation and Research&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;outline =&lt;/li&gt;
&lt;li&gt;fact: paradoxically, vision is a complex process even though it seems the simplest of functions&lt;/li&gt;
&lt;li&gt;objective= understand biological vision&lt;/li&gt;
&lt;li&gt;interaction between artificial and natural NNs&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://3minutosdearte.com/wp-content/uploads/2016/11/Mir%C3%B3-Paisaje-catal%C3%A1n-el-cazador-1923-24-e1534625628322.jpg"
&gt;
&lt;!-- &lt;img src="https://3minutosdearte.com/wp-content/uploads/2016/11/Mir%C3%B3-Paisaje-catal%C3%A1n-el-cazador-1923-24-e1534625628322.jpg" width="80%"/&gt; --&gt;
&lt;aside class="notes"&gt;
Paysage catalan (Le Chasseur), i.e. Catalan Landscape (The Hunter) by Joan Miró
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="principles-of-vision"&gt;Principles of Vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;cut in different levels: Marr (+ Poggio)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;arbitrary, but useful division of labor= computational / algorithm / hardware&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dynamics (computational)&lt;/li&gt;
&lt;li&gt;CNNs (hardware)&lt;/li&gt;
&lt;li&gt;spiking (algorithm)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;First: What is the function of vision?&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-ilya-repin-1884httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_001.jpg" alt="[An Unexpected Visitor (Ilya Repin, 1884)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Ilya Repin, 1884)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;seeing= interacting with the visual world&lt;/li&gt;
&lt;li&gt;social animals: looking at emotions&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-1"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_002.jpg" alt="[An Unexpected Visitor (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;active: the eye is always moving&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fr.wikipedia.org/wiki/Alfred_Iarbous" target="_blank" rel="noopener"&gt;https://fr.wikipedia.org/wiki/Alfred_Iarbous&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;consistency of eye traces&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-2"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---age-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_003.jpg" alt="[An Unexpected Visitor - *Age?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;Age?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;social animals: looking at emotions&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-3"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---how-long-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_006.jpg" alt="[An Unexpected Visitor - *How long?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;How long?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;active: depends on task&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-rotating-snakes-akiyoshi-kitaokahttpwwwritsumeiacjpakitaokaindex-ehtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/42_rotsnakes_main.jpg" alt="[Rotating Snakes *Akiyoshi KITAOKA*](http://www.ritsumei.ac.jp/~akitaoka/index-e.html)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Rotating Snakes &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Visual illusions are a great way to understand the constraints of vision&lt;/li&gt;
&lt;li&gt;notice that here the illusion depends on your eye movements&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Kitaoka.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Ilusions of brightness or lightness &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a simpler one showing effect of context&lt;/li&gt;
&lt;li&gt;here the ever changing lighting conditions from moonlight (1 candela) to sunlight (100 000 candela)&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion_without.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;inverting an illusion to uncover its underlying cause can be an intriguing process&lt;/li&gt;
&lt;li&gt;hering: two parallel lines&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-3"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;appear bent&lt;/li&gt;
&lt;li&gt;effect of context -&amp;gt; 3D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-4"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-rotating-snakes-akiyoshi-kitaokahttpwwwritsumeiacjpakitaokaindex-ehtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/42_rotsnakes_main.jpg" alt="[Rotating Snakes *Akiyoshi KITAOKA*](http://www.ritsumei.ac.jp/~akitaoka/index-e.html)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Rotating Snakes &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-1976-viking-orbiter-imagehttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Face-on-mars.jpg" alt="[Cydonia Mensae (1976) *Viking Orbiter image*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (1976) &lt;em&gt;Viking Orbiter image&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;more generally, it reveals that vision generates a model of the world&lt;/li&gt;
&lt;li&gt;pareidolia: seeing faces in clouds, or a face on Mars&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_low.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;30 years later&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_high.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip; it&amp;rsquo;s just a rock&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="principles-of-vision-1"&gt;Principles of vision?&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;we know more about the function&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="computational-neuroscience-of-vision"&gt;Computational neuroscience of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;let&amp;rsquo;s delve into a computational theory of vision&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="computational-neuroscience-of-vision-1"&gt;Computational neuroscience of vision&lt;/h2&gt;
&lt;figure id="figure-sejnowski-koch--churchland-1998httpwwwhmsharvardedubssneurobornlabnb204paperssejnowski-koch-churchland-science1988pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Churchland92.png" alt="[[Sejnowski, Koch &amp; Churchland (1998)](http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf)]" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf" target="_blank" rel="noopener"&gt;Sejnowski, Koch &amp;amp; Churchland (1998)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;it&amp;rsquo;s a multi-scale, complex model&amp;hellip;&lt;/li&gt;
&lt;li&gt;perhaps we will never be able to comprehend it in full&lt;/li&gt;
&lt;li&gt;words are not precise enough, let&amp;rsquo;s use mathematics and models to describe this system&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="anatomy-of-the-human-visual-system"&gt;Anatomy of the Human Visual system&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.readkong.com/static/06/b0/06b09f0235ae7fcf29438ce317c10e60/optogenetic-visual-cortical-prosthesis-9612386-7.jpg" alt="" loading="lazy" data-zoomable width="61%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;let&amp;rsquo;s start with the anatomy&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="human-visual-system--the-hmax-model"&gt;Human Visual system : the HMAX model&lt;/h2&gt;
&lt;figure id="figure-serre-and-poggio-2007"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.researchgate.net/profile/Thomas-Serre/publication/253467382/figure/fig1/AS:298143448092675@1448094345807/a-Organization-of-the-visual-cortex-The-diagram-is-modified-from-Gross-1998-Key.png" alt="[Serre and Poggio, 2007]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Serre and Poggio, 2007]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;and a model of it&amp;hellip;(&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;CNN, the mother of all deep learning models&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="primary-visual-cortex"&gt;Primary visual cortex&lt;/h2&gt;
&lt;figure id="figure-hubel--wiesel-1962"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/scientists.jpg" alt="[Hubel &amp; Wiesel, 1962]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Hubel &amp;amp; Wiesel, 1962]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;let&amp;rsquo;s zoom in, the basic ingredient is the receptive field&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="primary-visual-cortex-1"&gt;Primary visual cortex&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://raw.githubusercontent.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/master/figures/ComplexDirSelCortCell250_title.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;[Hubel &amp;amp; Wiesel, 1962]&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a single neuron is selective to some visual features&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="hybrid-ia-models"&gt;Hybrid IA models&lt;/h2&gt;
&lt;figure id="figure-using-goal-driven-deep-learning-models-to-understand-sensory-cortex-yamins--dicarlo-2016"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://knu-brainai.github.io/images/cnn.png" alt="Using goal-driven deep learning models to understand sensory cortex [Yamins &amp; DiCarlo, 2016] " loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Using goal-driven deep learning models to understand sensory cortex [Yamins &amp;amp; DiCarlo, 2016]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a single neuron is selective to some visual features&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-nets-cnn"&gt;Convolutional Neural Nets (CNN)&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;this can be integrated in a hierarchy&amp;hellip;&lt;/li&gt;
&lt;li&gt;defining a Convolutional Neural Network (CNN)&lt;/li&gt;
&lt;li&gt;one layer is a convolution&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-nets-cnn-1"&gt;Convolutional Neural Nets (CNN)&lt;/h2&gt;
&lt;figure id="figure-jérémie--lp-2023httpslaurentperrinetgithubiopublicationjeremie-23-ultra-fast-cat"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.mdpi.com/vision/vision-07-00029/article_deploy/html/images/vision-07-00029-g003.png" alt="[[Jérémie &amp; LP, 2023](https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/" target="_blank" rel="noopener"&gt;Jérémie &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;state of the art&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;!--
---
## CNN: Mathematics
* One-dimensional [discrete convolution](https://en.wikipedia.org/wiki/Convolution#Discrete_convolution) (eg in time) with a kernel $g$ of radius $K$:
$$
(f \ast g)[n]=\sum_{m=-K}^{K} f[n-m] \cdot g[m]
$$
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;and be formalized as a convolution&amp;hellip;&lt;/li&gt;
&lt;li&gt;but what is a convolution?&lt;/li&gt;
&lt;li&gt;let&amp;rsquo;s start in 1D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
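As a sanity check, here is a minimal NumPy sketch of this 1-D discrete convolution (not part of the original slides; the toy signal, kernel and zero-padding at the borders are illustrative assumptions):

```python
import numpy as np

# Direct implementation of the formula above, with a kernel g of radius K
# (length 2*K + 1); the signal is zero-padded so that every index n - m is valid.
def conv1d(f, g):
    K = (len(g) - 1) // 2
    f_pad = np.pad(f, K)
    out = np.zeros(len(f))
    for n in range(len(f)):
        for m in range(-K, K + 1):
            out[n] += f_pad[n + K - m] * g[m + K]
    return out

f = np.array([0.0, 1.0, 2.0, 3.0, 0.0])
g = np.array([1.0, 0.0, -1.0])           # radius K = 1
print(conv1d(f, g))                      # [ 1.  2.  2. -2. -3.]
print(np.convolve(f, g, mode="same"))    # same result
```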
---
## CNN: Mathematics
* Convolution of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:
$$
(f \ast g)[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x-i, y-j] \cdot g[i, j]
$$
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;now in 2D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## CNN: Mathematics
* **Cross-correlation** of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:
$$
(f \ast \tilde{g})[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x+i, y+j] \cdot g[i, j]
$$
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;note the difference between convolutions and cross-correlation&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## CNN: Mathematics
&lt;figure id="figure-amidi--amidihttpsstanfordedushervineteachingcs-230cheatsheet-convolutional-neural-networks"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://stanford.edu/~shervine/teaching/cs-230/illustrations/convolution-layer-a.png" alt="[[Amidi &amp; Amidi](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks" target="_blank" rel="noopener"&gt;Amidi &amp;amp; Amidi&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;it is a translation-invariant feature detector&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## CNN: Mathematics
* Correlation of an image defined on several channels (note [the order of the indices](https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html)):
$$
(f \ast \tilde{g})[x, y] = \sum_{c=1}^{C} \sum_{i,j} f[c, x+i, y+j] \cdot g[c, i, j]
$$
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;we can add different channels to the image (eg colors)&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## CNN: Mathematics
* Correlation of a multi-channel image for multiple output channels (note [the order of the indices](https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html)):
$$
(f \ast \tilde{g})[k, x, y] = \sum_{c,i,j} f[c, x+i, y+j] \cdot g[k, c, i, j]
$$
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;now we get to the full CNN&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
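A minimal NumPy sketch of this multi-output cross-correlation (not part of the original slides; shapes follow the channels-first convention of PyTorch, kernel indices start at 0 instead of -K, and the example sizes are arbitrary):

```python
import numpy as np

# f: input of shape (C, H, W); g: kernels of shape (K_out, C, h, w).
# Returns maps of shape (K_out, H - h + 1, W - w + 1), i.e. 'valid' mode, no padding.
def cross_correlate(f, g):
    C, H, W = f.shape
    K_out, _, h, w = g.shape
    out = np.zeros((K_out, H - h + 1, W - w + 1))
    for k in range(K_out):
        for x in range(H - h + 1):
            for y in range(W - w + 1):
                # sum over channels c and kernel offsets i, j
                out[k, x, y] = np.sum(f[:, x:x + h, y:y + w] * g[k])
    return out

f = np.random.rand(3, 8, 8)         # e.g. an RGB patch
g = np.random.rand(16, 3, 3, 3)     # 16 output channels, 3x3 kernels
print(cross_correlate(f, g).shape)  # (16, 6, 6)
```

Up to padding and the bias term, this is what `torch.nn.Conv2d` computes (it implements cross-correlation rather than a true convolution).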
---
## CNN: the HMAX model
&lt;figure id="figure-serre-and-poggio-2006httpsbiologystackexchangecomquestions10955ventral-stream-pathway-and-architecture-proposed-by-poggios-group"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.stack.imgur.com/ZlFnp.png" alt="[[Serre and Poggio, 2006]](https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group)" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;[Serre and Poggio, 2006]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;sota&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
--&gt;
&lt;hr&gt;
&lt;h2 id="cnn-challenges"&gt;CNN: challenges&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;novel challenges for CNNs&lt;/li&gt;
&lt;li&gt;1/ backpropagation is not biologically plausible&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-predictive-processing"&gt;CNN: Predictive processing&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;modifications= adding sparse coding + feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-predictive-processing-1"&gt;CNN: Predictive processing&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= interpretable features&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-topography"&gt;CNN: Topography&lt;/h2&gt;
&lt;figure id="figure-bosking-et-al-1997"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Bosking97Fig4.jpg" alt="[Bosking *et al*, 1997]" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Bosking &lt;em&gt;et al&lt;/em&gt;, 1997]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;topography?&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-topography-1"&gt;CNN: Topography&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2022httpslaurentperrinetgithubiopublicationfranciosini-21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/franciosini-21/featured.jpg" alt="[[Boutin *et al*, 2022](https://laurentperrinet.github.io/publication/franciosini-21/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/franciosini-21/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2022&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= bio-mimetism&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="computational-neuroscience-of-vision-2"&gt;Computational neuroscience of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;neuroAI&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;!--
---
&lt;section&gt;
# Dynamics of vision
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;another important missing feature: time&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Dynamics of vision
&lt;figure id="figure-visual-latencies-grimaldi-et-al-2022httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency_bg.jpg" alt="Visual latencies [[Grimaldi *et al*, 2022]](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)" loading="lazy" data-zoomable width="55%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In particular in our group, we are interested in dynamics of neural processing&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The visual system is very efficient at generating a decision: from the retinal image, through the different stages of the visual pathways (here, for a macaque monkey), to a reaction of the finger muscles in about 300 milliseconds.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the process of categorizing an object takes 10 layers&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Dynamics of vision
&lt;figure id="figure-visual-latencies-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency.jpg" alt="Visual latencies ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;the latencies are similar in the human brain, merely scaled due to the brain size&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;as a consequence, it is thought that this efficiency is achieved by spikes, that is, brief all-or-none events which are passed through the very large network that forms the brain, from one assembly of neurons to another.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Dynamics of vision
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Dynamics of vision
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/figure-tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston, 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston, 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Dynamics of vision
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/perrinet-19-temps/flash_lag.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Dynamics of vision
&lt;figure id="figure-diagonal-markov-model-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_DiagonalMarkov.jpg" alt="Diagonal markov model ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Diagonal markov model (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Dynamics of vision
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/PBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/MBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/positional-delay.mp4" type="video/mp4"&gt;
&lt;/video&gt;
Flash-lag effect: MBP ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
# Dynamics of vision
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
--&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks-snn"&gt;Spiking Neural Networks (SNN)&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-leaky-integrate-and-fire-neuron"&gt;SNN: Leaky Integrate-and-Fire Neuron&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
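&lt;p&gt;For reference, a minimal sketch of such a standard leaky integrate-and-fire neuron (not from the slide; parameter values are illustrative), integrated with forward Euler:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;# illustrative LIF parameters (ms, mV, MOhm, nA), not taken from the figure
tau, v_rest, v_reset, theta, R = 20.0, -65.0, -70.0, -50.0, 10.0
dt, T, I = 0.1, 200.0, 2.0            # time step, duration, constant input current
v, spike_times = v_rest, []
for step in range(int(T / dt)):
    # forward-Euler step of tau * dv/dt = -(v - v_rest) + R * I
    v += dt / tau * (-(v - v_rest) + R * I)
    if v &gt;= theta:                    # threshold crossing: emit a spike and reset
        spike_times.append(step * dt)
        v = v_reset
print(len(spike_times), "spikes")
&lt;/code&gt;&lt;/pre&gt;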
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neurobiology"&gt;SNN in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.sstatic.net/ixnrz.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neurobiology-1"&gt;SNN in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neurobiology-2"&gt;SNN in neurobiology&lt;/h2&gt;
&lt;figure id="figure-diesmann-et-al-1999httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_3_diesmann_et_al_1999py"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Diesmann_et_al_1999.png" alt="[[Diesmann et al. 1999](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py" target="_blank" rel="noopener"&gt;Diesmann et al. 1999&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neurobiology-3"&gt;SNN in neurobiology&lt;/h2&gt;
&lt;figure id="figure-haimerl-et-al-2019httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" alt="[[Haimerl et al, 2019](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Haimerl et al, 2019&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Izhikevich polychronization&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;yet the domain is vast, and there is a lot to do in SNNs&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-spiking-motifs"&gt;SNN: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-spiking-motifs-1"&gt;SNN: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-spiking-motifs-2"&gt;SNN: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/HSD.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HSD neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;event-based cameras&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-1"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/HDSNN_conv.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-2"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HSD neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-3"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;nice kernels&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-4"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/accuracy.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;frugal computing&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="spiking-neural-networks-snn-1"&gt;Spiking Neural Networks (SNN)&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="artificial-neural-networks-applied-to-the-understanding-of-biological-vision"&gt;Artificial neural networks applied to the understanding of biological vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Only the speaker can read these notes&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;S&lt;/code&gt; key to view&lt;/li&gt;
&lt;li&gt;more on &lt;a href="https://raw.githubusercontent.com/wowchemy/starter-hugo-academic/master/exampleSite/content/slides/example/index.md" target="_blank" rel="noopener"&gt;doc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="artificial-neural-networks-and-machine-learning-applied-to-the-understanding-of-biological-vision-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2025-05-26-master-m-4-nc/?transition=fade" target="_blank" rel="noopener"&gt;Artificial neural networks and machine learning applied to the understanding of biological vision&lt;/a&gt;&lt;/h1&gt;
&lt;h3 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h3&gt;
&lt;h3 id="-master-m4nc-de-l-1"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2025-05-26-master-m-4-nc/" target="_blank" rel="noopener"&gt;[2025-05-26]&lt;/a&gt; &lt;a href="https://neuromod.univ-cotedazur.eu" target="_blank" rel="noopener"&gt;Master M4NC de l&amp;rsquo;institut NeuroMod, cours Prospective Innovation and Research&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;objective= understand biological vision&lt;/li&gt;
&lt;li&gt;interaction between artificial and natural NNs&lt;/li&gt;
&lt;li&gt;outline&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2025-04-18-vibration-apparences</title><link>https://laurentperrinet.github.io/slides/2025-04-18-vibration-apparences/</link><pubDate>Fri, 18 Apr 2025 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2025-04-18-vibration-apparences/</guid><description>&lt;section&gt;
&lt;h1 id="la-vibration-des-apparences"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2025-04-18-vibration-apparences/?transition=fade" target="_blank" rel="noopener"&gt;La vibration des apparences&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="journées-douverture-scientifique-jos"&gt;&lt;u&gt;&lt;a href="https://jos.lis-lab.fr/" target="_blank" rel="noopener"&gt;Journées d’Ouverture Scientifique (JOS)&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2025-04-18"&gt;[2025-04-18]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;a href="https://laurentperrinet.github.io/project/art-science/" target="_blank" rel="noopener"&gt;Art-Sciences&lt;/a&gt; /
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/qrcode.png" alt="QR code" height="80" width="80"&gt; --&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;outline =&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;to summarize= sparse representations help understand the neuroscience of biological vision&lt;/li&gt;
&lt;li&gt;they have practical applications in machine learning&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Not only the speaker can read these notes; press the &lt;code&gt;S&lt;/code&gt; key to view them&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;more on &lt;a href="https://raw.githubusercontent.com/wowchemy/starter-hugo-academic/master/exampleSite/content/slides/example/index.md" target="_blank" rel="noopener"&gt;doc&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="lirraisonnable-efficacité-de-la-vision"&gt;&amp;ldquo;L&amp;rsquo;irraisonnable efficacité de la vision&amp;rdquo;&lt;/h2&gt;
&lt;figure id="figure-comment-la-vision-a-évolué-perrinet-2024httpstheconversationcomchats-mouches-humains-comment-la-vision-a-evolue-en-de-multiples-facettes-220083"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://images.theconversation.com/files/568221/original/file-20240108-17-78s0cj.png" alt="Comment la vision a évolué... [[Perrinet, 2024]](https://theconversation.com/chats-mouches-humains-comment-la-vision-a-evolue-en-de-multiples-facettes-220083) " loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Comment la vision a évolué&amp;hellip; &lt;a href="https://theconversation.com/chats-mouches-humains-comment-la-vision-a-evolue-en-de-multiples-facettes-220083" target="_blank" rel="noopener"&gt;[Perrinet, 2024]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-rotating-snakes-akiyoshi-kitaokahttpwwwritsumeiacjpakitaokaindex-ehtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/42_rotsnakes_main.jpg" alt="[Rotating Snakes *Akiyoshi KITAOKA*](http://www.ritsumei.ac.jp/~akitaoka/index-e.html)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Rotating Snakes &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Visual illusions are a great way to understand the constraints of vision&lt;/li&gt;
&lt;li&gt;notice that here the illusion depends on your eye movements&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Kitaoka.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Ilusions of brightness or lightness &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a simpler one showing effect of context&lt;/li&gt;
&lt;li&gt;here the ever changing lighting conditions from moonlight (1 candela) to sunlight (100 000 candela)&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion_without.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;the process of working out the cause of an illusion can be intriguing&lt;/li&gt;
&lt;li&gt;hering: two parallel lines&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles-3"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;appear bent&lt;/li&gt;
&lt;li&gt;effect of context -&amp;gt; 3D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles--paréidolie"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt; : &lt;a href="https://fr.wikipedia.org/wiki/Par%c3%a9idolie" target="_blank" rel="noopener"&gt;Paréidolie&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-1976-viking-orbiter-imagehttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Face-on-mars.jpg" alt="[Cydonia Mensae (1976) *Viking Orbiter image*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (1976) &lt;em&gt;Viking Orbiter image&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;more generally, it reveals that vision generates a model of the world&lt;/li&gt;
&lt;li&gt;pareidolia: seeing faces in clouds, or a man on Mars&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles--paréidolie-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt; : &lt;a href="https://fr.wikipedia.org/wiki/Par%c3%a9idolie" target="_blank" rel="noopener"&gt;Paréidolie&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_low.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;30 years later&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles--paréidolie-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt; : &lt;a href="https://fr.wikipedia.org/wiki/Par%c3%a9idolie" target="_blank" rel="noopener"&gt;Paréidolie&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_high.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip; it&amp;rsquo;s just a rock&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="neurosciences-computationnelles-de-la-vision"&gt;Neurosciences computationnelles de la vision&lt;/h2&gt;
&lt;figure id="figure-sejnowski-koch--churchland-1998httpwwwhmsharvardedubssneurobornlabnb204paperssejnowski-koch-churchland-science1988pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Churchland92.png" alt="[[Sejnowski, Koch &amp; Churchland (1998)](http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf)]" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf" target="_blank" rel="noopener"&gt;Sejnowski, Koch &amp;amp; Churchland (1998)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Computational neuroscience is the discipline that tries to extract, from our knowledge of biological neuroscience, computational principles such as the formal neuron and its capacity for learning, which is the basic building block of neural networks. The latter led to the AI revolution with deep networks.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;it&amp;rsquo;s a multi-scale, complex model&amp;hellip;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;perhaps we will never be able to comprehend it in full&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;words are not precise enough, let&amp;rsquo;s use mathematics and models to describe this system&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="anatomie-du-système-visuel-humain"&gt;Anatomie du système visuel humain&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.readkong.com/static/06/b0/06b09f0235ae7fcf29438ce317c10e60/optogenetic-visual-cortical-prosthesis-9612386-7.jpg" alt="" loading="lazy" data-zoomable width="61%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;let&amp;rsquo;s start with the anatomy&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cortex-visuel-primaire"&gt;Cortex visuel primaire&lt;/h2&gt;
&lt;figure id="figure-hubel--wiesel-1962"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/scientists.jpg" alt="[Hubel &amp; Wiesel, 1962]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Hubel &amp;amp; Wiesel, 1962]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;let&amp;rsquo;s zoom in, the basic ingredient is the receptive field&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cortex-visuel-primaire-1"&gt;Cortex visuel primaire&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://raw.githubusercontent.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/master/figures/ComplexDirSelCortCell250_title.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;[Hubel &amp;amp; Wiesel, 1962]&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a single neuron is selective to some visual features&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="modèles-hybrides-dia"&gt;Modèles hybrides d&amp;rsquo;IA&lt;/h2&gt;
&lt;figure id="figure-using-goal-driven-deep-learning-models-to-understand-sensory-cortex-yamins--dicarlo-2016"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://knu-brainai.github.io/images/cnn.png" alt="Using goal-driven deep learning models to understand sensory cortex [Yamins &amp; DiCarlo, 2016] " loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Using goal-driven deep learning models to understand sensory cortex [Yamins &amp;amp; DiCarlo, 2016]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a single neuron is selective to some visual features&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="art--sciences"&gt;Art &amp;amp; Sciences&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;figure id="figure-etienne-reyhttpslaurentperrinetgithubioauthoretienne-rey"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/author/etienne-rey/avatar.jpg" alt="[Etienne Rey](https://laurentperrinet.github.io/author/etienne-rey/)" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/author/etienne-rey/" target="_blank" rel="noopener"&gt;Etienne Rey&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;a href="https://github.com/NaturalPatterns/2013_Tropique" target="_blank" rel="noopener"&gt;https://github.com/NaturalPatterns/2013_Tropique&lt;/a&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;figure id="figure-etienne-rey-spectre-audiographique--diffractionhttpsondesparallelesorgprojetscloche-spectre-audiographique-diffraction"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://ondesparalleles.org/wp-content/uploads/2014/02/cloche_fiche_a.jpg" alt="[Etienne Rey, SPECTRE AUDIOGRAPHIQUE – DIFFRACTION](https://ondesparalleles.org/projets/cloche-spectre-audiographique-diffraction/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://ondesparalleles.org/projets/cloche-spectre-audiographique-diffraction/" target="_blank" rel="noopener"&gt;Etienne Rey, SPECTRE AUDIOGRAPHIQUE – DIFFRACTION&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
/Users/laurentperrinet/sdrive_cnrs/blog/laurentperrinet.github.io_hugo/content/talk/2010-04-14-ondes-paralleles/index.md
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="tropique"&gt;Tropique&lt;/h2&gt;
&lt;figure id="figure-etienne-rey-tropiquehttpsondesparallelesorgprojetstropique-7"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://ondesparalleles.org/wp-content/uploads/2014/02/tropique_fiche_b.jpg" alt="[Etienne Rey, Tropique](https://ondesparalleles.org/projets/tropique-7/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://ondesparalleles.org/projets/tropique-7/" target="_blank" rel="noopener"&gt;Etienne Rey, Tropique&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="tropique-1"&gt;Tropique&lt;/h2&gt;
&lt;iframe src="https://player.vimeo.com/video/66161665" width="640" height="360" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;
&lt;hr&gt;
&lt;h2 id="tropique-2"&gt;Tropique&lt;/h2&gt;
&lt;iframe src="https://player.vimeo.com/video/56198653" width="640" height="360" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;
&lt;hr&gt;
&lt;figure id="figure-etienne-rey-cristal-n2httpsondesparallelesorgprojetscristal-n2__trashed"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://ondesparalleles.org/wp-content/uploads/2014/04/etienne_rey_horizons_variables_news2.jpg" alt="[Etienne Rey, Cristal n2](https://ondesparalleles.org/projets/cristal-n2__trashed/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://ondesparalleles.org/projets/cristal-n2__trashed/" target="_blank" rel="noopener"&gt;Etienne Rey, Cristal n2&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;figure id="figure-etienne-rey-trame-élasticitéhttpsondesparallelesorgprojetstrame-elasticite-vasarely"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/post/2016-06-02_elasticite/TRAME_Elasticit%c3%a9.jpg" alt="[Etienne Rey, TRAME ÉLASTICITÉ](https://ondesparalleles.org/projets/trame-elasticite-vasarely/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://ondesparalleles.org/projets/trame-elasticite-vasarely/" target="_blank" rel="noopener"&gt;Etienne Rey, TRAME ÉLASTICITÉ&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="trame-élasticité"&gt;TRAME ÉLASTICITÉ&lt;/h2&gt;
&lt;iframe src="https://player.vimeo.com/video/198189587" width="640" height="360" frameborder="0" allow="autoplay; fullscreen" allowfullscreen&gt;&lt;/iframe&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="de-la-nature-des-choses"&gt;De la nature des choses&lt;/h2&gt;
&lt;figure id="figure-phyllotaxiehttpsfrwikipediaorgwikiphyllotaxie"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/9/90/Phyllotaxis_golden_angle.svg" alt="[Phyllotaxie](https://fr.wikipedia.org/wiki/Phyllotaxie)" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Phyllotaxie" target="_blank" rel="noopener"&gt;Phyllotaxie&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;p&gt;Par &lt;a href="//commons.wikimedia.org/wiki/User:Cmglee" title="User:Cmglee"&gt;Cmglee&lt;/a&gt; — &lt;span class="int-own-work" lang="fr"&gt;Travail personnel&lt;/span&gt;, &lt;a href="https://creativecommons.org/licenses/by-sa/4.0" title="Creative Commons Attribution-Share Alike 4.0"&gt;CC BY-SA 4.0&lt;/a&gt;, &lt;a href="https://commons.wikimedia.org/w/index.php?curid=146404567"&gt;Lien&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;!--
## De la nature des choses
&lt;figure id="figure-ngc-4414httpsfrwikipediaorgwikigalaxie_spirale"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/c/c3/NGC_4414_%28NASA-med%29.jpg" alt="[NGC 4414](https://fr.wikipedia.org/wiki/Galaxie_spirale)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Galaxie_spirale" target="_blank" rel="noopener"&gt;NGC 4414&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://upload.wikimedia.org/wikipedia/commons/c/c3/NGC_4414_%28NASA-med%29.jpg"
&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;figure id="figure-etienne-rey-densité-flouhttpslaurentperrinetgithubiopost2019-06-22_ardemone"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/post/2019-06-22_ardemone/featured.png" alt="[Etienne Rey, Densité flou](https://laurentperrinet.github.io/post/2019-06-22_ardemone/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/post/2019-06-22_ardemone/" target="_blank" rel="noopener"&gt;Etienne Rey, Densité flou&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;figure id="figure-etienne-rey-horizon-faillehttpslaurentperrinetgithubiopost2021-10-04_interstices"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/post/2021-10-04_interstices/featured.jpg" alt="[Etienne Rey, Horizon Faille](https://laurentperrinet.github.io/post/2021-10-04_interstices/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/post/2021-10-04_interstices/" target="_blank" rel="noopener"&gt;Etienne Rey, Horizon Faille&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="caustiques"&gt;Caustiques&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://github.com/NaturalPatterns/2020_caustiques/raw/main/iridiscence.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;!--
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://laurentperrinet.github.io/post/2024-11-07_vibration-apparences/featured.jpg"
&gt;
--&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/post/2024-11-07_vibration-apparences/featured.jpg" alt="" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="la-vibration-des-apparences-1"&gt;La vibration des apparences&lt;/h2&gt;
&lt;figure id="figure-paul-cézanne-montagne-sainte-victoire-1904httpsenwikipediaorgwikipaul_cc3a9zanne"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/c/c9/Montagne_Sainte-Victoire%2C_par_Paul_C%C3%A9zanne_108.jpg" alt="[Paul Cézanne, Montagne Sainte-Victoire, 1904](https://en.wikipedia.org/wiki/Paul_C%C3%A9zanne)" loading="lazy" data-zoomable width="62%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Paul_C%C3%A9zanne" target="_blank" rel="noopener"&gt;Paul Cézanne, Montagne Sainte-Victoire, 1904&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="la-vibration-des-apparences-2"&gt;La vibration des apparences&lt;/h2&gt;
&lt;figure id="figure-merleau-ponty-sens-et-non-senshttpslaurentperrinetgithubioauthoretienne-rey"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/Merleau-Ponty_Sens-et-non-sens.png" alt="[Merleau-Ponty, Sens et non-sens](https://laurentperrinet.github.io/author/etienne-rey/)" loading="lazy" data-zoomable width="62%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/author/etienne-rey/" target="_blank" rel="noopener"&gt;Merleau-Ponty, Sens et non-sens&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;figure id="figure-etienne-rey-trameshttpslaurentperrinetgithubiopost2018-04-10_trames"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/post/2018-04-10_trames/featured.png" alt="[Etienne Rey, Trames](https://laurentperrinet.github.io/post/2018-04-10_trames/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/post/2018-04-10_trames/" target="_blank" rel="noopener"&gt;Etienne Rey, Trames&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="la-vibration-des-apparences-3"&gt;La vibration des apparences&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/visite_virtuelle.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;h2 id="la-vibration-des-apparences-4"&gt;La vibration des apparences&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/video1.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;N_rho&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_phi&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;34&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;233&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;retino_grid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_rho&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_phi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_H&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_V&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;size_mag&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;ecc_max&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;alpha&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;power&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;operator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;both&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;cr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;scale&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;N_H&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_V&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;cr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;set_operator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;operator&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;# https://laurentperrinet.github.io/sciblog/posts/2020-04-16-creating-an-hexagonal-grid.html&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;phi_v&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rho_v&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;meshgrid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;linspace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_phi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="kc"&gt;False&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;linspace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ecc_max&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_rho&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="kc"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;:],&lt;/span&gt; &lt;span class="n"&gt;sparse&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="kc"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;indexing&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;xy&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;phi_v&lt;/span&gt;&lt;span class="p"&gt;[::&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;:]&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pi&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;N_phi&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;offsets&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;colors&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;c1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c2&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;offset_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;zip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;offsets&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;colors&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;# convert to cartesian coordinates&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rho_v&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;phi_v&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;offset_&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;Y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rho_v&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cos&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;phi_v&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;Y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Y&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;R&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;size_mag&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;rho_v&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;power&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;N_rho&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;# draw &lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;zip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ravel&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;Y&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ravel&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;R&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ravel&lt;/span&gt;&lt;span class="p"&gt;()):&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;circle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;cr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;set_source_rgba&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;hue_to_rgba&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;alpha&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;cr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fill&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;cr&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;c_blue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;240&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;dc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;opts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;N_rho&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;N_rho&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_phi&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;N_phi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_H&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;N_H&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_V&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;N_V&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.07&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;size_mag&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ecc_max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;alpha&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c1&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;c_blue&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;dc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c2&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;c_blue&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="n"&gt;dc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;power&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;operator&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cairo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;OPERATOR_MULTIPLY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nd"&gt;@disp&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;draw&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_H&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;N_H&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;N_V&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;N_V&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="n"&gt;cr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;retino_grid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;!--
---
## La vibration des apparences
&lt;figure id="figure-etienne-rey-la-vibration-des-apparenceshttpslaurentperrinetgithubiotalk2025-04-18-vibration-apparences"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/post/2018-04-10_trames/featured.png" alt="[Etienne Rey, La vibration des apparences](https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/)" loading="lazy" data-zoomable width="60%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/" target="_blank" rel="noopener"&gt;Etienne Rey, La vibration des apparences&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/video1.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
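&lt;p&gt;For reference, a minimal sketch of the two drawing helpers assumed by the snippet above (&lt;code&gt;circle&lt;/code&gt; and &lt;code&gt;hue_to_rgba&lt;/code&gt; are called but not shown on this slide); this sketch assumes a pycairo context and a hue given in degrees:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import colorsys
import numpy as np

def circle(cr, x, y, r):
    # add a full disc of radius r centered on (x, y) to the current path
    cr.arc(x, y, r, 0, 2 * np.pi)

def hue_to_rgba(hue, alpha):
    # map a hue angle (in degrees, a sketch assumption) to an (r, g, b, a) tuple
    r, g, b = colorsys.hls_to_rgb((hue % 360) / 360, .5, 1.)
    return r, g, b, alpha
&lt;/code&gt;&lt;/pre&gt;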
&lt;hr&gt;
&lt;figure id="figure-etienne-rey-la-vibration-des-apparenceshttpslaurentperrinetgithubiotalk2025-04-18-vibration-apparences"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/2024-09-04_canaux_both.png" alt="[Etienne Rey, La vibration des apparences](https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/)" loading="lazy" data-zoomable height="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/" target="_blank" rel="noopener"&gt;Etienne Rey, La vibration des apparences&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/2025-01-18_la-vibration-des-apparences.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;h2 id="la-vibration-des-apparences-5"&gt;La vibration des apparences&lt;/h2&gt;
&lt;iframe width="640" height="360" frameborder="0" src="https://www.shadertoy.com/embed/3Xf3W4?gui=true&amp;t=10&amp;paused=true&amp;muted=false" allowfullscreen&gt;&lt;/iframe&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="la-vibration-des-apparences-6"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2025-04-18-vibration-apparences/?transition=fade" target="_blank" rel="noopener"&gt;La vibration des apparences&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2025-04-18-vibration-apparences/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="journées-douverture-scientifique-jos-1"&gt;&lt;u&gt;&lt;a href="https://jos.lis-lab.fr/" target="_blank" rel="noopener"&gt;Journées d’Ouverture Scientifique (JOS)&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2025-04-18-1"&gt;[2025-04-18]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;a href="https://laurentperrinet.github.io/project/art-science/" target="_blank" rel="noopener"&gt;Art-Sciences&lt;/a&gt; /
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/qrcode.png" alt="QR code" height="80" width="80"&gt; --&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;to summarize=&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2025-03-11-phd-program-sparse-representations</title><link>https://laurentperrinet.github.io/slides/2025-03-11-phd-program-sparse-representations/</link><pubDate>Tue, 11 Mar 2025 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2025-03-11-phd-program-sparse-representations/</guid><description>&lt;section&gt;
&lt;h1 id="sparse-representations"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2025-03-11-phd-program-sparse-representations/?transition=fade" target="_blank" rel="noopener"&gt;Sparse representations&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2025-03-11-phd-program-sparse-representations/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="neuroschool-phd-program-in-neuroscience"&gt;&lt;u&gt;&lt;a href="https://neuro-marseille.org/en/training/phd-program/" target="_blank" rel="noopener"&gt;NeuroSchool PhD Program in Neuroscience&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2025-03-11"&gt;[2025-03-11]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;a href="https://github.com/CONECT-INT/2025-03_PhDProgram-course-in-computational-neuroscience" target="_blank" rel="noopener"&gt;Code&lt;/a&gt; /
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;
&lt;img src="https://laurentperrinet.github.io/qrcode.png" alt="QR code" height="80" width="80"&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;outline =&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;to summarize = sparse representations help us understand biological vision in neuroscience&lt;/li&gt;
&lt;li&gt;they have practical applications in machine learning&lt;/li&gt;
&lt;li&gt;let&amp;rsquo;s sparse!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;in practice: sparse coding in a nutshell&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;perspective: convolutional sparse coding&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;url_code = &lt;a href="https://github.com/CONECT-INT/2025-03_PhDProgram-course-in-computational-neuroscience" target="_blank" rel="noopener"&gt;https://github.com/CONECT-INT/2025-03_PhDProgram-course-in-computational-neuroscience&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;These notes are not just for the speaker: press the &lt;code&gt;S&lt;/code&gt; key to view them&lt;/li&gt;
&lt;li&gt;more on &lt;a href="https://raw.githubusercontent.com/wowchemy/starter-hugo-academic/master/exampleSite/content/slides/example/index.md" target="_blank" rel="noopener"&gt;doc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="sparse-representations-1"&gt;Sparse representations?&lt;/h2&gt;
&lt;!--
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.vhv.rs/dpng/d/57-574294_old-man-shrugging-shoulders-meme-hd-png-download.png" alt="" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
--&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.imgflip.com/2lmff7.jpg" alt="" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
you may have heard of it, but do you know what it is?
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://3minutosdearte.com/wp-content/uploads/2016/11/Mir%C3%B3-Paisaje-catal%C3%A1n-el-cazador-1923-24-e1534625628322.jpg"
&gt;
&lt;!-- &lt;img src="https://3minutosdearte.com/wp-content/uploads/2016/11/Mir%C3%B3-Paisaje-catal%C3%A1n-el-cazador-1923-24-e1534625628322.jpg" width="80%"/&gt; --&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Paysage catalan (Le Chasseur)&lt;/p&gt;
&lt;p&gt;To rephrase Wigner&amp;rsquo;s expression &lt;a href="https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences" target="_blank" rel="noopener"&gt;&amp;ldquo;The Unreasonable Effectiveness of Mathematics&amp;rdquo;&lt;/a&gt;, the &amp;ldquo;unreasonable efficiency of vision&amp;rdquo; is playfully illustrated in this painting by Joan Miró, which depicts this Catalan landscape with just a few strokes: our imagination fills in the gaps and gives meaning to the scene, allowing us to imagine the hunter, the sardine or the plane.&lt;/p&gt;
&lt;p&gt;the whole is the sum of a few parts&lt;/p&gt;
&lt;p&gt;Sparse coding is a technique used in signal processing and machine learning to represent data in a more concise and efficient manner. It aims to find a sparse representation of the data, which means representing the data with only a small number of non-zero coefficients or activations. In sparse coding, a set of basis functions or atoms is typically defined, and the goal is to find a linear combination of these atoms that best represents the input data. The coefficients of this linear combination are often constrained to be sparse, meaning that only a few of them are allowed to be non-zero.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-computer-vision"&gt;Sparse representations in computer vision&lt;/h2&gt;
&lt;figure id="figure-lp-et-al-2004httpslaurentperrinetgithubiopublicationperrinet-04-tauc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-04-tauc/featured.png" alt="[[LP *et al*, 2004](https://laurentperrinet.github.io/publication/perrinet-04-tauc/)]" loading="lazy" data-zoomable height="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-04-tauc/" target="_blank" rel="noopener"&gt;LP &lt;em&gt;et al&lt;/em&gt;, 2004&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;vision is an inverse problem&lt;/p&gt;
&lt;p&gt;link with autoencoder&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-computer-vision-1"&gt;Sparse representations in computer vision&lt;/h2&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/publication/perrinet-03-ieee/v1_tiger.gif" width="60%"/&gt;
&lt;aside class="notes"&gt;
ça marche très bien!
&lt;/aside&gt;
---
## Convolutional Sparse Coding --&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2015-05-22-a-hitchhiker-guide-to-matching-pursuit/MPtutorial_rec.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;Code @ &lt;a href="https://laurentperrinet.github.io/sciblog/posts/2015-05-22-a-hitchhiker-guide-to-matching-pursuit.html" target="_blank" rel="noopener"&gt;A hitchhiker guide to Matching Pursuit&lt;/a&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-computer-vision-2"&gt;Sparse representations in computer vision&lt;/h2&gt;
&lt;figure id="figure-lp-and-bednar-2015httpslaurentperrinetgithubiopublicationperrinet-bednar-15"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/PerrinetBednar15/raw/master/figures/figure_synthesis.svg" alt="[[LP and Bednar, 2015]](https://laurentperrinet.github.io/publication/perrinet-bednar-15/)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/perrinet-bednar-15/" target="_blank" rel="noopener"&gt;[LP and Bednar, 2015]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;extracting edges is useful&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-computer-vision-3"&gt;Sparse representations in computer vision&lt;/h2&gt;
&lt;figure id="figure-lp-2021httpslaurentperrinetgithubiosciblogposts2021-03-27-density-of-stars-on-the-surface-of-the-skyhtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/sciblog/files/2021-03-27_generative.png" alt="[[LP, 2021](https://laurentperrinet.github.io/sciblog/posts/2021-03-27-density-of-stars-on-the-surface-of-the-sky.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/sciblog/posts/2021-03-27-density-of-stars-on-the-surface-of-the-sky.html" target="_blank" rel="noopener"&gt;LP, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
an extreme case: astrophysics
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-neuromorphic-engineering"&gt;Sparse representations in neuromorphic engineering&lt;/h2&gt;
&lt;p&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_arm-roll.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_hand-clap.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_air-guitar.webp" width="33%"/&gt;&lt;/p&gt;
&lt;!--
&lt;figure id="figure-gregor-lenz-2020httpslenzgregorcompostsevent-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://lenzgregor.com/posts/event-cameras/post-rethinking/events.gif" alt="[[Gregor Lenz, 2020](https://lenzgregor.com/posts/event-cameras/)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://lenzgregor.com/posts/event-cameras/" target="_blank" rel="noopener"&gt;Gregor Lenz, 2020&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Ultimately, we get a list of events for each pixel that can be &lt;em&gt;merged&lt;/em&gt; to represent the entire image. This list of events includes pixel addresses, times of occurrence, and polarities. Note that since events are generated over time, they are naturally sorted by their time of occurrence. These events are then transmitted in &lt;em&gt;real time&lt;/em&gt; to the output bus, often via a USB3 connection.
It&amp;rsquo;s interesting to draw a parallel between this process and the optic nerve that connects our retina to the brain. In fact, the output of the retina consists of a million ganglion cells that emit action potentials, which are the only source of information transmitted by the &lt;em&gt;optic nerve&lt;/em&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.researchgate.net/profile/Guido-Croon/publication/313221316/figure/fig2/AS:668997448134663@1536512829861/Picture-of-the-event-based-camera-employed-in-this-work-the-DVS_W640.jpg" target="_blank" rel="noopener"&gt;https://www.researchgate.net/profile/Guido-Croon/publication/313221316/figure/fig2/AS:668997448134663@1536512829861/Picture-of-the-event-based-camera-employed-in-this-work-the-DVS_W640.jpg&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;!--
---
## Sparse representations in neuromorphic engineering
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;kernels learned for motion detection&lt;/li&gt;
&lt;li&gt;can we force a sparse connectivity (beware that&amp;rsquo;s diferent from sparse activity)&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
## Sparse representations in neuromorphic engineering
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/accuracy.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;yes, the accuracy drops, but it&amp;rsquo;s still good enough with a 500x sparsity&lt;/li&gt;
&lt;li&gt;frugal computing&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt; --&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-neuroscience"&gt;Sparse representations in neuroscience&lt;/h2&gt;
&lt;figure id="figure-brunel-2001httpsbooksgooglefrbookshlfrlridb8wodqwdtsscoifndpgpa307otsknhqrj-tszsig0wi2cq2rnmxc7fvtyjoewzedlcgredir_escyvonepageqffalse"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/Brunel200Fig2.png" alt="[[Brunel, 2001](https://books.google.fr/books?hl=fr&amp;lr=&amp;id=b8woDqWdTssC&amp;oi=fnd&amp;pg=PA307&amp;ots=KNHQrJ-TsZ&amp;sig=0WI2cq2RnMXC7fVTyjOEWZEdlCg&amp;redir_esc=y#v=onepage&amp;q&amp;f=false)]" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://books.google.fr/books?hl=fr&amp;amp;lr=&amp;amp;id=b8woDqWdTssC&amp;amp;oi=fnd&amp;amp;pg=PA307&amp;amp;ots=KNHQrJ-TsZ&amp;amp;sig=0WI2cq2RnMXC7fVTyjOEWZEdlCg&amp;amp;redir_esc=y#v=onepage&amp;amp;q&amp;amp;f=false" target="_blank" rel="noopener"&gt;Brunel, 2001&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Phase diagrams of sparsely connected networks of excitatory and inhibitory spiking neurons&lt;/p&gt;
&lt;p&gt;healthy network = about 1 Hz firing rate = sparse activity (even sparser in auditory cortex, in insects, &amp;hellip;)&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-neuroscience-1"&gt;Sparse representations in neuroscience&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-neuroscience-2"&gt;Sparse representations in neuroscience&lt;/h2&gt;
&lt;figure id="figure-kremkow-et-al-2016httpslaurentperrinetgithubiopublicationkremkow-16"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/fncir-10-00037-g001a.jpg" alt="[[Kremkow *et al*, 2016](https://laurentperrinet.github.io/publication/kremkow-16/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/kremkow-16/" target="_blank" rel="noopener"&gt;Kremkow &lt;em&gt;et al&lt;/em&gt;, 2016&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-neuroscience-3"&gt;Sparse representations in neuroscience&lt;/h2&gt;
&lt;figure id="figure-kremkow-et-al-2016httpslaurentperrinetgithubiopublicationkremkow-16"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/fncir-10-00037-g001b.jpg" alt="[[Kremkow *et al*, 2016](https://laurentperrinet.github.io/publication/kremkow-16/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/kremkow-16/" target="_blank" rel="noopener"&gt;Kremkow &lt;em&gt;et al&lt;/em&gt;, 2016&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-neuroscience-4"&gt;Sparse representations in neuroscience&lt;/h2&gt;
&lt;figure id="figure-kremkow-et-al-2016httpslaurentperrinetgithubiopublicationkremkow-16"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/fncir-10-00037-g001.jpg" alt="[[Kremkow *et al*, 2016](https://laurentperrinet.github.io/publication/kremkow-16/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/kremkow-16/" target="_blank" rel="noopener"&gt;Kremkow &lt;em&gt;et al&lt;/em&gt;, 2016&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
Vinje and Gallant
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-2"&gt;Sparse representations?&lt;/h2&gt;
&lt;!--
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.vhv.rs/dpng/d/57-574294_old-man-shrugging-shoulders-meme-hd-png-download.png" alt="" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
--&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://memecreator.org/static/images/memes/5646953.jpg" alt="" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
in summary: Sparse representations resulting from these processes have been successfully applied in various domains such as image processing, computer vision, and audio signal processing. It has shown promise in tasks such as noise reduction, compression, feature extraction, and pattern recognition. By capturing the essential structure and characteristics of the data in a sparse representation, sparse coding can help reduce redundancy and noise, and extract meaningful features for further analysis or processing.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="sparse-representations-in-a-nutshell"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.giphy.com/26xBtPbmDlugFxUiY.webp" alt="" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;let&amp;rsquo;s delve into a computational theory of sparse coding&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;review: &amp;ldquo;Sparse models&amp;rdquo; (LP, 2015), a chapter in &lt;a href="https://laurentperrinet.github.io/publication/cristobal-perrinet-keil-15-bicv/"&gt;Biologically Inspired Computer Vision&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-1"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-lp-et-al-2004httpslaurentperrinetgithubiopublicationperrinet-04-tauc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-04-tauc/featured.png" alt="[[LP *et al*, 2004](https://laurentperrinet.github.io/publication/perrinet-04-tauc/)]" loading="lazy" data-zoomable height="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-04-tauc/" target="_blank" rel="noopener"&gt;LP &lt;em&gt;et al&lt;/em&gt;, 2004&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-2"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-olshausen-and-field-1997httpmplabucsdedumarniigertolshaussen_1997pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/Olshausen_2.png" alt="[[Olshausen and Field (1997)](http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf)]" loading="lazy" data-zoomable height="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf" target="_blank" rel="noopener"&gt;Olshausen and Field (1997)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-3"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;Generative model of image synthesis:&lt;/p&gt;
&lt;p&gt;$I[x, y] = $
&lt;span class="fragment " &gt;
$\sum_{i=1}^{K} a[i] \cdot \phi[i, x, y]$
&lt;/span&gt;
&lt;span class="fragment " &gt;
$ + \varepsilon[x, y]$
&lt;/span&gt;&lt;/p&gt;
&lt;span class="fragment " &gt;
Where $\phi$ is a dictionary of $K$ atoms, $a$ is a sparse vector of coefficients, and $\varepsilon$ is a noise term.
&lt;/span&gt;
&lt;p&gt;[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-15-bicv/" target="_blank" rel="noopener"&gt;LP (2015)&lt;/a&gt;]&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;generative model&lt;/p&gt;
&lt;p&gt;$\phi$ is over-complete (else the problem is trivially solved by the pseudo-inverse)&lt;/p&gt;
&lt;/aside&gt;
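&lt;p&gt;As an illustration, a minimal NumPy sketch of this generative model; the dictionary, the sparse coefficient vector and the noise level below are toy values chosen for the example:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

rng = np.random.default_rng(42)
N_X, N_Y, K = 16, 16, 64                 # toy image size and number of atoms

# dictionary of K atoms, each normalized to unit energy
phi = rng.standard_normal((K, N_X, N_Y))
phi /= np.sqrt((phi**2).sum(axis=(1, 2), keepdims=True))

# sparse coefficient vector: only 5 non-zero entries
a = np.zeros(K)
a[rng.choice(K, size=5, replace=False)] = rng.standard_normal(5)

# synthesize the image: linear combination of atoms plus Gaussian noise
sigma_n = 0.05
I = np.tensordot(a, phi, axes=1) + sigma_n * rng.standard_normal((N_X, N_Y))
&lt;/code&gt;&lt;/pre&gt;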
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-4"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-olshausen-and-field-1997httpmplabucsdedumarniigertolshaussen_1997pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/Olshausen_1.png" alt="[[Olshausen and Field (1997)](http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf" target="_blank" rel="noopener"&gt;Olshausen and Field (1997)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-5"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;Given an observation $I$,&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\mathcal{L}(a) &amp;amp; = - \log Pr( a | I ) \\
\end{aligned}
$$&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-6"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;Given an observation $I$,&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\mathcal{L}(a) &amp;amp; = - \log Pr( a | I ) \\
&amp;amp; = - \log Pr( I | a ) - \log Pr(a) \\
\end{aligned}
$$&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-7"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;Given an observation $I$,&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\mathcal{L}(a) &amp;amp; = - \log Pr( a | I ) \\
&amp;amp; = - \log Pr( I | a ) - \log Pr(a) \\
&amp;amp; = \frac{1}{2\sigma_n^2} \sum_{x, y} ( I[x, y] - \sum_{i=1}^{K} a[i] \cdot \phi[i, x, y])^2 - \sum_{i=1}^{K} \log Pr( a[i] )
\end{aligned}
$$&lt;/p&gt;
&lt;aside class="notes"&gt;
Probabilistic model
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-8"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;The problem is formalized as an optimization problem $a^\ast = \arg \min_a \mathcal{L}(a)$ with:&lt;/p&gt;
&lt;p&gt;$$
\mathcal{L}(a) = \frac{1}{2} \sum_{x, y} ( I[x, y] - \sum_{i=1}^{K} a[i] \cdot \phi[i, x, y])^2 + \lambda \cdot \sum_{i=1}^{K} ( a[i] \neq 0)
$$&lt;/p&gt;
&lt;p&gt;[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-15-bicv/" target="_blank" rel="noopener"&gt;LP (2015)&lt;/a&gt;]&lt;/p&gt;
&lt;aside class="notes"&gt;
spiking prior =&amp;gt; L0 pseudo-norm
the L0 problem is NP-hard
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-9"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;The problem is formalized as an optimization problem $a^\ast = \arg \min_a \mathcal{L}(a)$ with:&lt;/p&gt;
&lt;p&gt;$$
\mathcal{L}(a) = \frac{1}{2} \sum_{x, y} ( I[x, y] - \sum_{i=1}^{K} a[i] \cdot \phi[i, x, y])^2 + \lambda \cdot \sum_{i=1}^{K} | a[i] |
$$&lt;/p&gt;
&lt;aside class="notes"&gt;
Laplacian (exponential) prior =&amp;gt; L1 norm
&lt;/aside&gt;
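&lt;p&gt;A minimal sketch of this L1-penalized cost, written for the same toy &lt;code&gt;I&lt;/code&gt;, &lt;code&gt;phi&lt;/code&gt; and &lt;code&gt;a&lt;/code&gt; as above; &lt;code&gt;lambda_&lt;/code&gt; is the sparsity trade-off parameter:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def sparse_coding_loss(I, phi, a, lambda_):
    # quadratic reconstruction error plus L1 penalty on the coefficients
    residual = I - np.tensordot(a, phi, axes=1)
    return .5 * (residual**2).sum() + lambda_ * np.abs(a).sum()
&lt;/code&gt;&lt;/pre&gt;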
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-10"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-rentzeperis-et-al-2023httpslaurentperrinetgithubiopublicationrentzeperis-23"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/rentzeperis-23/featured.png" alt="[[Rentzeperis *et al* (2023)](https://laurentperrinet.github.io/publication/rentzeperis-23/)]" loading="lazy" data-zoomable height="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/rentzeperis-23/" target="_blank" rel="noopener"&gt;Rentzeperis &lt;em&gt;et al&lt;/em&gt; (2023)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-11"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-olshausen-and-field-1997httpmplabucsdedumarniigertolshaussen_1997pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/Olshausen_5.png" alt="[[Olshausen and Field (1997)](http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf)]" loading="lazy" data-zoomable height="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf" target="_blank" rel="noopener"&gt;Olshausen and Field (1997)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Neural implementation = gradient descent&lt;/p&gt;
&lt;p&gt;LASSO = least absolute shrinkage and selection operator&lt;/p&gt;
&lt;p&gt;Orthogonal Matching Pursuit (OMP): OMP is an iterative algorithm used for sparse signal recovery. It starts with an empty sparse solution and iteratively selects the dictionary atoms most correlated with the residual signal. OMP aims to minimize the L2 norm of the residual while maintaining sparsity. It is greedy and can provide a near-optimal sparse solution.&lt;/p&gt;
&lt;p&gt;Basis Pursuit (BP): Basis Pursuit is an optimization problem that seeks the sparsest solution to an underdetermined linear system of equations. It involves minimizing the L1 norm of the coefficient vector subject to a linear constraint. BP can be solved using linear programming techniques or convex optimization algorithms.&lt;/p&gt;
&lt;p&gt;Iterative Soft Thresholding Algorithm (ISTA): ISTA is an iterative optimization algorithm commonly used in sparse coding. It alternates between a gradient descent step and a soft thresholding step. The gradient descent step minimizes the data fidelity term, and the soft thresholding step enforces sparsity by setting small coefficients to zero. ISTA converges to a sparse solution and can be used for dictionary learning.&lt;/p&gt;
&lt;p&gt;FISTA (Fast Iterative Shrinkage-Thresholding Algorithm): FISTA is an accelerated version of ISTA that improves convergence speed. It incorporates momentum into the optimization process and achieves faster convergence rates.&lt;/p&gt;
&lt;p&gt;ADMM (Alternating Direction Method of Multipliers): ADMM is an optimization technique that decomposes the original problem into smaller subproblems and solves them iteratively. It is often used for convex optimization problems with L1 regularization. ADMM has been applied to solve sparse coding problems efficiently.&lt;/p&gt;
&lt;/aside&gt;
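&lt;p&gt;As a complement, a minimal sketch of the ISTA update mentioned in the notes; the dictionary is flattened into a matrix &lt;code&gt;D&lt;/code&gt; of shape (number of pixels, K), and the names and step count are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def ista(I, D, lambda_, n_iter=100):
    # Iterative Soft-Thresholding: a gradient step on the quadratic term,
    # followed by soft thresholding to enforce the L1 penalty
    L = np.linalg.norm(D, ord=2)**2      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a + D.T @ (I - D @ a) / L                             # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lambda_ / L, 0.)  # shrinkage
    return a
&lt;/code&gt;&lt;/pre&gt;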
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;!-- &lt;section style="text-align: left;"&gt; --&gt;
&lt;h2 id="matching-pursuit-algorithm"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Init : Residual $R = I$, sparse vector $a$ such that $\forall i$, $a[i] = 0$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;while $\frac{1}{2} \sum_{x, y} R[x, y]^2 &amp;gt; \vartheta $, do :&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;instead of finding the exact solution to the approximate problem, let&amp;rsquo;s solve the exact one approximately&lt;/p&gt;
&lt;p&gt;[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-15-bicv/" target="_blank" rel="noopener"&gt;LP (2010)&lt;/a&gt;]&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-1"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Init : $R = I$, $\forall i$, $a[i] = 0$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;while $\frac{1}{2} \sum_{x, y} R[x, y]^2 &amp;gt; \vartheta $, do :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;compute $c[i] = \sum_{x, y} (R[x, y] - a[i] \cdot \phi[i, x, y])^2$&lt;/li&gt;
&lt;li&gt;Match: $i^\ast = \arg \min_i c[i]$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
greedy, one by one
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-2"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Init : $R = I$, $\forall i$, $a[i] = 0$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;while $\frac{1}{2} \sum_{x, y} R[x, y]^2 &amp;gt; \vartheta $, do :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Match : $i^\ast = \arg \max_i \sum_{x, y} R[x, y] \cdot \phi[i, x, y]$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
use of correlation instead of energy
assign the value of the sparse vector for the winning atom
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-3"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Init : $R = I$, $\forall i$, $a[i] = 0$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;while $\frac{1}{2} \sum_{x, y} R[x, y]^2 &amp;gt; \vartheta $, do :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Match :
$i^\ast = \arg \max_i \sum_{x, y} R[x, y] \cdot \phi[i, x, y]$&lt;/li&gt;
&lt;li&gt;Assign : $a[i^\ast] = \frac{\sum_{x, y} R[x, y] \cdot \phi[i^\ast, x, y]}{\sum_{x, y} \phi[i^\ast, x, y] \cdot \phi[i^\ast, x, y]}$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
use of correlation instead of energy
assign the value of the sparse vector for the winning atom
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-4"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Init : $R = I$, $\forall i$, $a[i] = 0$, and normalize $\sum_{x, y} \phi[i, x, y]^2 = 1$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;while $\frac{1}{2} \sum_{x, y} R[x, y]^2 &amp;gt; \vartheta $, do :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Match : $i^\ast = \arg \max_i \sum_{x, y} R[x, y] \cdot \phi[i, x, y]$&lt;/li&gt;
&lt;li&gt;Assign : $a[i^\ast] = \sum_{x, y} R[x, y] \cdot \phi[i^\ast, x, y]$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
use of correlation
assign the first value of the sparse vector to the winning one
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-5"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Init : $R = I$, $\forall i$, $a[i] = 0$, $\sum_{x, y} \phi[i, x, y]^2 = 1$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;while $\frac{1}{2} \sum_{x, y} R[x, y]^2 &amp;gt; \vartheta $, do :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Match : $i^\ast = \arg \max_i \sum_{x, y} R[x, y] \cdot \phi[i, x, y]$&lt;/li&gt;
&lt;li&gt;Assign : $a[i^\ast] = \sum_{x, y} R[x, y] \cdot \phi[i^\ast, x, y]$&lt;/li&gt;
&lt;li&gt;Pursuit : $R[x, y] \leftarrow R[x, y] - a[i^\ast] \cdot \phi[i^\ast, x, y]$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
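&lt;p&gt;As a minimal sketch (my own illustration, assuming a dictionary &lt;code&gt;phi&lt;/code&gt; of shape (n_atoms, n_pixels) with L2-normalized rows and a flattened image &lt;code&gt;I&lt;/code&gt;), the loop above reads:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def matching_pursuit(I, phi, theta=1e-3, n_max=1000):
    R = I.astype(float).copy()          # residual, initialized to the image
    a = np.zeros(phi.shape[0])          # sparse vector
    for _ in range(n_max):
        if 0.5 * np.sum(R ** 2) &amp;lt;= theta:
            break
        c = phi @ R                     # correlations with the residual
        i_star = np.argmax(c)           # Match
        a[i_star] += c[i_star]          # Assign (accumulated if re-selected)
        R -= c[i_star] * phi[i_star]    # Pursuit: update the residual
    return a
&lt;/code&gt;&lt;/pre&gt;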
&lt;aside class="notes"&gt;
use of correlation
assign the first value of the sparse vector to the winning one
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-6"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Init : $R = I$, $\forall i$, $a[i] = 0$, $\sum_{x, y} \phi[i, x, y]^2 = 1$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;compute $c[i] = \sum_{x, y} R[x, y] \cdot \phi[i, x, y]$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;compute $X[i, j] = \sum_{x, y} \phi[i, x, y] \cdot \phi[j, x, y]$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;while $\frac{1}{2} \sum_{x, y} R[x, y]^2 &amp;gt; \vartheta $, do :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Match : $i^\ast = \arg \max_i c[i]$&lt;/li&gt;
&lt;li&gt;Assign : $a[i^\ast] = c[i^\ast]$&lt;/li&gt;
&lt;li&gt;Pursuit : $c[i] \leftarrow c[i] - a[i^\ast] \cdot X[i, i^\ast] $&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-03-ieee" target="_blank" rel="noopener"&gt;LP (2004)&lt;/a&gt;]&lt;/p&gt;
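&lt;p&gt;A sketch of this correlation-domain variant (my own illustration, assuming the Gram matrix of the normalized dictionary fits in memory):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def matching_pursuit_gram(I, phi, theta=1e-3, n_max=1000):
    c = phi @ I                  # initial correlations
    X = phi @ phi.T              # cross-correlation (Gram) matrix of the atoms
    a = np.zeros(phi.shape[0])
    E = 0.5 * np.sum(I ** 2)     # residual energy, updated in closed form
    for _ in range(n_max):
        if E &amp;lt;= theta:
            break
        i_star = np.argmax(c)    # Match
        a_i = c[i_star]
        a[i_star] += a_i         # Assign
        c -= a_i * X[:, i_star]  # Pursuit, entirely in the correlation domain
        E -= 0.5 * a_i ** 2      # energy decrease for normalized atoms
    return a
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The residual image is never touched inside the loop; only the correlation vector is updated.&lt;/p&gt;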
&lt;aside class="notes"&gt;
use of correlation
assign the first value of the sparse vector to the winning one
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-7"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/publication/perrinet-03-ieee/v1_tiger.gif" width="60%"/&gt;
&lt;aside class="notes"&gt;
ça marche très bien!
&lt;/aside&gt;
---
## Convolutional Sparse Coding --&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2015-05-22-a-hitchhiker-guide-to-matching-pursuit/MPtutorial_rec.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;Code @ &lt;a href="https://laurentperrinet.github.io/sciblog/posts/2015-05-22-a-hitchhiker-guide-to-matching-pursuit.html" target="_blank" rel="noopener"&gt;A hitchhiker guide to Matching Pursuit&lt;/a&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-8"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;p&gt;Hebbian learning (once the sparse code is known):&lt;/p&gt;
&lt;p&gt;$$
\phi_{i}[x, y] \leftarrow \phi_{i}[x, y] + \eta \cdot a[i] \cdot (I[x, y] - \sum_{j=1}^{K} a[j] \cdot \phi_{j}[x, y] )
$$&lt;/p&gt;
&lt;p&gt;[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-15-bicv/" target="_blank" rel="noopener"&gt;LP (2015)&lt;/a&gt;]&lt;/p&gt;
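&lt;p&gt;As a sketch (my own illustration, with images and atoms flattened to vectors), this Hebbian update on the dictionary reads:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def hebbian_update(phi, I, a, eta=0.01):
    # phi: (n_atoms, n_pixels), a: (n_atoms,), I: (n_pixels,)
    residual = I - a @ phi                 # reconstruction error of the sparse code
    phi += eta * np.outer(a, residual)     # Hebbian step: activity times error
    phi /= np.linalg.norm(phi, axis=1, keepdims=True)   # keep atoms normalized
    return phi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The re-normalization step is an extra assumption on my part; it keeps the atoms consistent with the unit-norm convention used in the matching pursuit slides.&lt;/p&gt;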
&lt;aside class="notes"&gt;
&lt;p&gt;Unsupervised Learning of the dictionary&lt;/p&gt;
&lt;p&gt;Hebbian learning&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-9"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/ssc.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-12"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-lp-et-al-2004httpslaurentperrinetgithubiopublicationperrinet-04-tauc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-04-tauc/featured.png" alt="[[LP *et al*, 2004](https://laurentperrinet.github.io/publication/perrinet-04-tauc/)]" loading="lazy" data-zoomable width="55%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-04-tauc/" target="_blank" rel="noopener"&gt;LP &lt;em&gt;et al&lt;/em&gt;, 2004&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="convolutional-sparse-coding"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;this can be integrated in a hierarchy&amp;hellip;&lt;/li&gt;
&lt;li&gt;defining a Convolutional Neural Network (CNN)&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolutional-neural-nets-cnn"&gt;Convolutional Neural Nets (CNN)&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;one layer is a convolution - so let&amp;rsquo;s describe that first&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolutional-neural-nets-cnn-1"&gt;Convolutional Neural Nets (CNN)&lt;/h3&gt;
&lt;figure id="figure-jérémie--lp-2023httpslaurentperrinetgithubiopublicationjeremie-23-ultra-fast-cat"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.mdpi.com/vision/vision-07-00029/article_deploy/html/images/vision-07-00029-g003.png" alt="[[Jérémie &amp; LP, 2023](https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/" target="_blank" rel="noopener"&gt;Jérémie &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;sota&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolution-mathematics"&gt;Convolution: Mathematics&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;One-dimensional &lt;a href="https://en.wikipedia.org/wiki/Convolution#Discrete_convolution" target="_blank" rel="noopener"&gt;discrete convolution&lt;/a&gt; (eg in time) with a kernel $g$ of radius $K$:
$$
(f \ast g)[n]=\sum_{m=-K}^{K} f[n-m] \cdot g[m]
$$&lt;/li&gt;
&lt;/ul&gt;
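&lt;p&gt;A quick numerical check of this formula (a sketch of mine, assuming zero padding at the borders):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

f = np.random.randn(100)    # signal, e.g. in time
g = np.random.randn(7)      # kernel of radius K = 3 (length 2K+1)
K = (len(g) - 1) // 2

out = np.convolve(f, g, mode='same')

# explicit sum, matching (f*g)[n] = sum_m f[n-m] g[m] with g indexed from -K to K
f_pad = np.pad(f, K)        # zero padding at both ends
out_explicit = np.array([
    sum(f_pad[n + K - m] * g[m + K] for m in range(-K, K + 1))
    for n in range(len(f))
])
assert np.allclose(out, out_explicit)
&lt;/code&gt;&lt;/pre&gt;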
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;and be formalized as a convolution&amp;hellip;&lt;/li&gt;
&lt;li&gt;but what is a convolution?&lt;/li&gt;
&lt;li&gt;let&amp;rsquo;s start in 1D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolution-mathematics-1"&gt;Convolution: Mathematics&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Convolution of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast g)[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x-i, y-j] \cdot g[i, j]
$$&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;now in 2D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolution-mathematics-2"&gt;Convolution: Mathematics&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cross-correlation&lt;/strong&gt; of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x+i, y+j] \cdot g[i, j]
$$&lt;/p&gt;
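&lt;p&gt;A sketch of mine to check the relation between the two operations: a convolution is a cross-correlation with the kernel flipped in both dimensions.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np
from scipy.signal import convolve2d, correlate2d

f = np.random.randn(32, 32)   # image
g = np.random.randn(5, 5)     # kernel of radius K = 2

conv = convolve2d(f, g, mode='same')    # (f * g)[x, y]
corr = correlate2d(f, g, mode='same')   # cross-correlation

# the two differ only by a 180-degree flip of the kernel
assert np.allclose(conv, correlate2d(f, g[::-1, ::-1], mode='same'))
&lt;/code&gt;&lt;/pre&gt;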
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;note the difference between convolutions and cross-correlation&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolution-mathematics-3"&gt;Convolution: Mathematics&lt;/h3&gt;
&lt;figure id="figure-amidi--amidihttpsstanfordedushervineteachingcs-230cheatsheet-convolutional-neural-networks"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://stanford.edu/~shervine/teaching/cs-230/illustrations/convolution-layer-a.png" alt="[[Amidi &amp; Amidi](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks" target="_blank" rel="noopener"&gt;Amidi &amp;amp; Amidi&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;it is a translation-invariant feature detector&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolution-mathematics-4"&gt;Convolution: Mathematics&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of an image defined on several channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{c=1}^{C} \sum_{i,j} f[c, x+i, y+j] \cdot g[c, i, j]
$$&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;we can add different channels to the image (eg colors)&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolution-mathematics-5"&gt;Convolution: Mathematics&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of a multi-channel image for multiple output channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[k, x, y] = \sum_{c,i,j} f[c, x+i, y+j] \cdot g[k, c, i, j]
$$&lt;/p&gt;
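&lt;p&gt;As a sketch (mine, using PyTorch), this is what a convolutional layer computes; note that &lt;code&gt;torch.nn.Conv2d&lt;/code&gt; implements the cross-correlation above (no kernel flip):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch

C, K_out, radius = 3, 16, 2
layer = torch.nn.Conv2d(in_channels=C, out_channels=K_out,
                        kernel_size=2 * radius + 1, padding=radius, bias=False)

f = torch.randn(1, C, 64, 64)   # a batch of one image with C channels
out = layer(f)                  # shape (1, K_out, 64, 64)
print(layer.weight.shape)       # (K_out, C, 2*radius+1, 2*radius+1), i.e. g[k, c, i, j]
&lt;/code&gt;&lt;/pre&gt;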
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;now we get to the full CNN&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-the-hmax-model"&gt;CNN: the HMAX model&lt;/h3&gt;
&lt;figure id="figure-serre-and-poggio-2006httpsbiologystackexchangecomquestions10955ventral-stream-pathway-and-architecture-proposed-by-poggios-group"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.stack.imgur.com/ZlFnp.png" alt="[[Serre and Poggio, 2006]](https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group)" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;[Serre and Poggio, 2006]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;sota&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-challenges"&gt;CNN: challenges&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;novel challenges for CNNs&lt;/li&gt;
&lt;li&gt;1/ backpropagation is not bioplausible&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="convolutional-sparse-coding-1"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_b.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;adding a first loop of sparse coding&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-sparse-coding-2"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2015-05-22-a-hitchhiker-guide-to-matching-pursuit/MPtutorial_rec.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;Code @ &lt;a href="https://laurentperrinet.github.io/sciblog/posts/2015-05-22-a-hitchhiker-guide-to-matching-pursuit.html" target="_blank" rel="noopener"&gt;A hitchhiker guide to Matching Pursuit&lt;/a&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-sparse-coding-3"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;figure id="figure-lp-2015httpslaurentperrinetgithubiopublicationperrinet-15-bicv"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-15-bicv/featured.png" alt="[[LP, 2015](https://laurentperrinet.github.io/publication/perrinet-15-bicv/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-15-bicv/" target="_blank" rel="noopener"&gt;LP, 2015&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;p&gt;Code @ &lt;a href="https://nbviewer.org/github/bicv/SparseEdges/blob/master/SparseEdges.ipynb" target="_blank" rel="noopener"&gt;SparseEdges&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;good performance - depends on the size of the input image&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-sparse-coding-4"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;figure id="figure-ladret-et-al-2024httpslaurentperrinetgithubiopublicationladret-24-sparse"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/ladret-23-iclr/fig_dicos.png" alt="[[Ladret *et al*, 2024](https://laurentperrinet.github.io/publication/ladret-24-sparse/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/ladret-24-sparse/" target="_blank" rel="noopener"&gt;Ladret &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;heterogeneity is important&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-sparse-coding-5"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_c.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;novel challenges for CNNs&lt;/li&gt;
&lt;li&gt;1/ backpropagation is not bioplausible&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-sparse-coding-6"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;novel challenges for CNNs&lt;/li&gt;
&lt;li&gt;1/ backpropagation is not bioplausible&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/SDPC_3.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result on MNIST&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing-1"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure4a.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;modifications= adding sparse coding + feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing-2"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure4b.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;modifications= adding sparse coding + feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing-3"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= interpretable features&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing-4"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/training_video_ATT.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= interpretable features&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-topography"&gt;CNN: Topography&lt;/h3&gt;
&lt;figure id="figure-bosking-et-al-1997"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Bosking97Fig4.jpg" alt="[Bosking *et al*, 1997]" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Bosking &lt;em&gt;et al&lt;/em&gt;, 1997]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;topography?&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-topography-1"&gt;CNN: Topography&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2022httpslaurentperrinetgithubiopublicationfranciosini-21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/franciosini-21/featured.jpg" alt="[[Boutin *et al*, 2022](https://laurentperrinet.github.io/publication/franciosini-21/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/franciosini-21/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2022&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= bio-mimetism&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="sparse-representations-3"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2025-03-11-phd-program-sparse-representations/?transition=fade" target="_blank" rel="noopener"&gt;Sparse representations&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2025-03-11-phd-program-sparse-representations/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="neuroschool-phd-program-in-neuroscience-1"&gt;&lt;u&gt;&lt;a href="https://neuro-marseille.org/en/training/phd-program/" target="_blank" rel="noopener"&gt;NeuroSchool PhD Program in Neuroscience&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2025-03-11-1"&gt;[2025-03-11]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;a href="https://github.com/laurentperrinet/2024-04_sparse-representations" target="_blank" rel="noopener"&gt;Code&lt;/a&gt; /
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;to summarize= sparse representations help to understand biological vision in neuroscience&lt;/li&gt;
&lt;li&gt;they have practical applications in machine learning&lt;/li&gt;
&lt;li&gt;let&amp;rsquo;s sparse!&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2025-02-14-supaero</title><link>https://laurentperrinet.github.io/slides/2025-02-14-supaero/</link><pubDate>Fri, 14 Feb 2025 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2025-02-14-supaero/</guid><description>&lt;section&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://laurentperrinet.github.io/slides/2025-02-14-supaero/?transition=fade"&gt;
&lt;h2&gt;What can &lt;i&gt;Neuroscience&lt;/i&gt; bring to &lt;i&gt;Artificial Intelligence&lt;/i&gt;?&lt;/h2&gt;
&lt;/a&gt;
&lt;br&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="ANR" width="98%"&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;th&gt;
[2025-02-14] Airbus Helicopters&lt;br&gt;
&lt;i&gt; Laurent Perrinet &lt;/i&gt; &amp;horbar;
&lt;a href="https://laurentperrinet.github.io"&gt;https://laurentperrinet.github.io&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;img src="https://laurentperrinet.github.io/qrcode.png" alt="QR code" height="10%" width="10%"&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;outline =&lt;/li&gt;
&lt;li&gt;Hello. I am Laurent Perrinet, CNRS research director in computational neuroscience at the Institut des neurosciences de la Timone in Marseille. Thank you for the invitation to take part in this friendly day.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://laurentperrinet.github.io/publication/perrinet-03-these/jury.jpg"
&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;But what is a neuroscientist doing at Airbus Helicopters?&lt;/p&gt;
&lt;p&gt;I am myself passionate about aeronautics and space, which led me to the SUPAERO aeronautics school, and then to satellite imaging, which already relied on AI in the form of neural networks. From there, thanks to meeting my mathematics professor Manuel Samuelides, I discovered computational neuroscience and the power it offers both to better understand the brain and to build new artificial intelligence systems. Here is a picture from my PhD defense.&lt;/p&gt;
&lt;p&gt;The jury was composed (from left to right) of Jeanny Hérault (reviewer), Michel Imbert (president), Yves Burnod (reviewer, not in the picture), Manuel Samuelides (thesis advisor) and Simon Thorpe (thesis co-advisor).&lt;/p&gt;
&lt;p&gt;Since then, I have been developing &lt;strong&gt;neural networks&lt;/strong&gt; designed as algorithms / numerical optimization processes, which I apply to automated image processing. The new perspective is not simply to use neuro-mimetic inspiration, but to go back and forth with experiments.&lt;/p&gt;
&lt;p&gt;but first, what about AI?&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="lintelligence-artificielle-est-elle-intelligente-"&gt;L&amp;rsquo;intelligence artificielle est-elle &amp;ldquo;intelligente&amp;rdquo; ?&lt;/h2&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://laurentperrinet.github.io/talk/2025-02-14-supaero/flying-AI_916750.png"
&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://laurentperrinet.github.io/talk/2025-02-14-supaero/clippy_AI_apocalypse.jpg"
&gt;
&lt;hr&gt;
&lt;h2 id="lintelligence-artificielle-ia-est-elle-intelligente-"&gt;L&amp;rsquo;intelligence artificielle (IA) est-elle &amp;ldquo;intelligente&amp;rdquo; ?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;AI is a multi-disciplinary science which aims to build machines capable of performing intelligent tasks, similar to those carried out by human beings.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;1950s-1970s: logical and symbolic approaches&lt;/li&gt;
&lt;li&gt;1980s-2010s: machine learning&lt;/li&gt;
&lt;li&gt;2010s-2020s: deep learning&lt;/li&gt;
&lt;li&gt;2020s-&amp;hellip;: the transformer revolution&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Artificial intelligence, and deep learning in particular, has made enormous progress in recent years. However, two major obstacles remain for its adoption in embedded systems or robotics.&lt;/p&gt;
&lt;p&gt;Rosenblatt&amp;rsquo;s perceptron (1957) and Fukushima&amp;rsquo;s neocognitron (1980)&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I am convinced that we are at the turning point of a new era in the development of embedded systems, in which artificial intelligence has the potential to create disruptive innovations matching the performance of natural intelligence, and for which it is essential to draw inspiration from biological neuroscience.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="lintelligence-artificielle-est-elle-intelligente--1"&gt;L&amp;rsquo;intelligence artificielle est-elle &amp;ldquo;intelligente&amp;rdquo; ?&lt;/h2&gt;
&lt;figure id="figure-sommet-de-lia-de-2025"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.notretemps.com/1400x787/smart/2025/02/11/lombre-de-musk-plane-sur-le-sommet-ia-de-paris.jpg" alt="Sommet de l&amp;#39;IA de 2025" loading="lazy" data-zoomable width="85%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The 2025 AI Summit in Paris
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;social impact&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;safety&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;sovereignty&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="enjeux-de-lia-embarquée--latence-de-réponse"&gt;Enjeux de l&amp;rsquo;IA embarquée : latence de réponse&lt;/h2&gt;
&lt;figure id="figure-visual-latencies-grimaldi-et-al-2022httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency.jpg" alt="Visual latencies [[Grimaldi *et al*, 2022]](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)" loading="lazy" data-zoomable width="55%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Tout d’abord, les systèmes sensoriels biologiques sont composés de séquences de traitement qui possèdent des délais de traitement. Je décris ici la chaîne de traitement d’une image visuelle, ici pour un enfant jouant à un jeu et devant cliquer sur le bon bouton, et qui illustre les différentes latences du traitement de l’information de la vision à l’action.&lt;/p&gt;
&lt;p&gt;Si les délais dans un système embarqué sont plus rapides, il reste que les informations dans les différentes étapes de traitement peuvent être décalées et nécessitent un traitement adapté afin de répondre de la façon la plus immédiate possible. Je pense notamment à la détection d&amp;rsquo;objets en mouvement très rapide dans le cadre spatial.&lt;/p&gt;
&lt;p&gt;Tout d&amp;rsquo;abord, la plupart de ces systèmes traitent des données statiques. Ils ignorent notamment l&amp;rsquo;aspect dynamique, comme la nécessité de pouvoir répondre à tout moment ou de compenser les délais de traitement.
Dans un premier temps, je présenterai un nouveau type de caméra, inspirée du fonctionnement de la rétine et du codage neural par potentiels d&amp;rsquo;actions ou « spikes ». Ces caméras permettent de capturer l&amp;rsquo;information sous forme d&amp;rsquo;événements et nécessitent d&amp;rsquo;adapter les algorithmes de traitement de l&amp;rsquo;information, qui sont plus proches de ceux utilisés par le cerveau.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="enjeux-de-lia-embarquée--budget-énergétique"&gt;Enjeux de l&amp;rsquo;IA embarquée : budget énergétique&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/sciblog/files/2016-04-28_mejanes/figures/power.png" alt="" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;A second constraint, linked to the first one: energy consumption.&lt;/p&gt;
&lt;p&gt;Lee Sedol in 2016 - &lt;a href="https://en.wikipedia.org/wiki/AlphaGo" target="_blank" rel="noopener"&gt;https://en.wikipedia.org/wiki/AlphaGo&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Moreover, these systems are often very power-hungry, which makes them incompatible with embedded systems. In this presentation, I will discuss the importance of the interaction between neuroscience and artificial intelligence, and how these two fields can enrich each other to increase their efficiency.&lt;/p&gt;
&lt;p&gt;In a second part, I will show how the temporal dimension of this signal can be exploited for efficient, low-power computer vision applications, particularly well suited to robotics.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="lirraisonnable-efficacité-de-la-vision"&gt;&amp;ldquo;L&amp;rsquo;irraisonnable efficacité de la vision&amp;rdquo;&lt;/h2&gt;
&lt;figure id="figure-comment-la-vision-a-évolué-perrinet-2024httpstheconversationcomchats-mouches-humains-comment-la-vision-a-evolue-en-de-multiples-facettes-220083"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://images.theconversation.com/files/568221/original/file-20240108-17-78s0cj.png" alt="Comment la vision a évolué... [[Perrinet, 2024]](https://theconversation.com/chats-mouches-humains-comment-la-vision-a-evolue-en-de-multiples-facettes-220083) " loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
How vision evolved&amp;hellip; &lt;a href="https://theconversation.com/chats-mouches-humains-comment-la-vision-a-evolue-en-de-multiples-facettes-220083" target="_blank" rel="noopener"&gt;[Perrinet, 2024]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-rotating-snakes-akiyoshi-kitaokahttpwwwritsumeiacjpakitaokaindex-ehtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/42_rotsnakes_main.jpg" alt="[Rotating Snakes *Akiyoshi KITAOKA*](http://www.ritsumei.ac.jp/~akitaoka/index-e.html)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Rotating Snakes &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Visual illusions are a great way to understand the constraints of vision&lt;/li&gt;
&lt;li&gt;notice that here the illusion depends on your eye movements&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Kitaoka.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Ilusions of brightness or lightness &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a simpler one showing effect of context&lt;/li&gt;
&lt;li&gt;here the ever changing lighting conditions from moonlight (1 candela) to sunlight (100 000 candela)&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion_without.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;working backwards from an illusion to its cause can be intriguing&lt;/li&gt;
&lt;li&gt;hering: two parallel lines&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles-3"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;appear bent&lt;/li&gt;
&lt;li&gt;effect of context -&amp;gt; 3D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles--paréidolie"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt; : &lt;a href="https://fr.wikipedia.org/wiki/Par%c3%a9idolie" target="_blank" rel="noopener"&gt;Paréidolie&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-1976-viking-orbiter-imagehttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Face-on-mars.jpg" alt="[Cydonia Mensae (1976) *Viking Orbiter image*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (1976) &lt;em&gt;Viking Orbiter image&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;more generally, it reveals that vision generates a model of the world&lt;/li&gt;
&lt;li&gt;pareidolia: seeing faces in clouds, or a face on Mars&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles--paréidolie-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt; : &lt;a href="https://fr.wikipedia.org/wiki/Par%c3%a9idolie" target="_blank" rel="noopener"&gt;Paréidolie&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_low.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;30 years later&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="illusions-visuelles--paréidolie-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Illusions visuelles&lt;/a&gt; : &lt;a href="https://fr.wikipedia.org/wiki/Par%c3%a9idolie" target="_blank" rel="noopener"&gt;Paréidolie&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_high.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip; it&amp;rsquo;s just a rock&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="neurosciences-computationnelles-de-la-vision"&gt;Neurosciences computationnelles de la vision&lt;/h2&gt;
&lt;figure id="figure-sejnowski-koch--churchland-1998httpwwwhmsharvardedubssneurobornlabnb204paperssejnowski-koch-churchland-science1988pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Churchland92.png" alt="[[Sejnowski, Koch &amp; Churchland (1998)](http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf)]" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf" target="_blank" rel="noopener"&gt;Sejnowski, Koch &amp;amp; Churchland (1998)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Computational neuroscience is the field that tries to extract computational principles from our knowledge of biological neuroscience, such as the formal neuron and its capacity for learning, which is the basic building block of neural networks. The latter led to the AI revolution with deep networks.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;it&amp;rsquo;s a multi-scale, complex model&amp;hellip;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;perhaps we will never be able to comprehend it in full&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;words are not precise enough, let&amp;rsquo;s use mathematics and models to describe this system&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="anatomie-du-système-visuel-humain"&gt;Anatomie du système visuel humain&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.readkong.com/static/06/b0/06b09f0235ae7fcf29438ce317c10e60/optogenetic-visual-cortical-prosthesis-9612386-7.jpg" alt="" loading="lazy" data-zoomable width="61%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;let&amp;rsquo;s start with the anatomy&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;!--
---
## Système visuel humain : le modèle HMAX
&lt;figure id="figure-serre-and-poggio-2007"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.researchgate.net/profile/Thomas-Serre/publication/253467382/figure/fig1/AS:298143448092675@1448094345807/a-Organization-of-the-visual-cortex-The-diagram-is-modified-from-Gross-1998-Key.png" alt="[Serre and Poggio, 2007]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Serre and Poggio, 2007]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;and a model of it&amp;hellip;(&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;CNN, the mother of all deep learning models&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt; --&gt;
&lt;hr&gt;
&lt;h2 id="cortex-visuel-primaire"&gt;Cortex visuel primaire&lt;/h2&gt;
&lt;figure id="figure-hubel--wiesel-1962"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/scientists.jpg" alt="[Hubel &amp; Wiesel, 1962]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Hubel &amp;amp; Wiesel, 1962]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;let&amp;rsquo;s zoom in, the basic ingredient is the receptive field&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cortex-visuel-primaire-1"&gt;Cortex visuel primaire&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://raw.githubusercontent.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/master/figures/ComplexDirSelCortCell250_title.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;[Hubel &amp;amp; Wiesel, 1962]&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a single neuron is selective to some visual features&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="modèles-hybrides-dia"&gt;Modèles hybrides d&amp;rsquo;IA&lt;/h2&gt;
&lt;figure id="figure-using-goal-driven-deep-learning-models-to-understand-sensory-cortex-yamins--dicarlo-2016"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://knu-brainai.github.io/images/cnn.png" alt="Using goal-driven deep learning models to understand sensory cortex [Yamins &amp; DiCarlo, 2016] " loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Using goal-driven deep learning models to understand sensory cortex [Yamins &amp;amp; DiCarlo, 2016]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a single neuron is selective to some visual features&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns"&gt;Levier #1: Réseaux de neurones impulsionnels (SNNs)&lt;/h2&gt;
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption data-pre="Figure&amp;nbsp;" data-post=":&amp;nbsp;" class="numbered"&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;New cameras: based on the same technology as a CMOS sensor, but instead of collecting the full set of luminance values over all pixels at regular intervals, each pixel is independent.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the way information is represented is different: the signal consists in emitting an event if and only if a change has been observed by that pixel, which is represented here by these event streams (see the sketch after this list).&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
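&lt;p&gt;As a minimal sketch (my own illustration, with made-up numbers), an event stream can be stored as a list of (x, y, polarity, timestamp) tuples and accumulated into a frame for display:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

H, W, n_events = 128, 128, 10000
events = np.zeros(n_events, dtype=[('x', 'u2'), ('y', 'u2'), ('p', 'i1'), ('t', 'f8')])
events['x'] = np.random.randint(0, W, n_events)
events['y'] = np.random.randint(0, H, n_events)
events['p'] = np.random.choice([-1, 1], n_events)  # OFF / ON polarity
events['t'] = np.sort(np.random.rand(n_events))    # timestamps in seconds

frame = np.zeros((H, W))
np.add.at(frame, (events['y'], events['x']), events['p'])  # accumulate polarities
&lt;/code&gt;&lt;/pre&gt;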
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-1"&gt;Levier #1: Réseaux de neurones impulsionnels (SNNs)&lt;/h2&gt;
&lt;p&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_arm-roll.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_hand-clap.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_air-guitar.webp" width="33%"/&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-2"&gt;Levier #1: Réseaux de neurones impulsionnels (SNNs)&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Sensor&lt;/th&gt;
&lt;th&gt;Range&lt;/th&gt;
&lt;th&gt;Framerate&lt;/th&gt;
&lt;th&gt;Resolution&lt;/th&gt;
&lt;th&gt;Power&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Human eye&lt;/td&gt;
&lt;td&gt;60 (?) dB&lt;/td&gt;
&lt;td&gt;300 (?) fps&lt;/td&gt;
&lt;td&gt;100 (?) Mpx&lt;/td&gt;
&lt;td&gt;10 mW&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DSLR&lt;/td&gt;
&lt;td&gt;44.6 dB&lt;/td&gt;
&lt;td&gt;120 fps&lt;/td&gt;
&lt;td&gt;2&amp;ndash;20 Mpx&lt;/td&gt;
&lt;td&gt;30 W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ultra-high speed&lt;/td&gt;
&lt;td&gt;64 dB&lt;/td&gt;
&lt;td&gt;10^4 fps&lt;/td&gt;
&lt;td&gt;0.3&amp;ndash;4 Mpx&lt;/td&gt;
&lt;td&gt;300 W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Event-based&lt;/td&gt;
&lt;td&gt;120 dB&lt;/td&gt;
&lt;td&gt;10^6 fps&lt;/td&gt;
&lt;td&gt;0.1&amp;ndash;2 Mpx&lt;/td&gt;
&lt;td&gt;30 mW&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Event cameras have several properties which make them remarkable. First of all, the temporal precision of the events is on the order of a microsecond, which allows a theoretical rate on the order of one million frames per second. This can be compared with a conventional camera, on the order of a hundred frames per second, or with a high-speed camera, which can reach 10,000 frames per second. It is difficult to estimate the sampling rate of human perception: while 25 frames per second are often sufficient to watch a movie, it has been shown that the human eye can distinguish temporal details down to the millisecond.&lt;/p&gt;
&lt;p&gt;Another important feature of these cameras is their ability to detect a very wide range of luminance, far exceeding that of conventional cameras, at 120 dB (a factor of one million, compared with the human eye&amp;rsquo;s factor of about one in a thousand between full moon and sunlight),&lt;/p&gt;
&lt;p&gt;It should be noted that the &amp;ldquo;spatial resolution&amp;rdquo; of these cameras is often relatively modest, on the order of a megapixel. However, this is not a technical limitation, but rather a consequence of the technological applications in which these cameras are commonly used.&lt;/p&gt;
&lt;p&gt;Compared with conventional cameras, which consume several watts, event cameras draw very little electrical power, on the order of 10 milliwatts, i.e. a consumption equivalent to that of the human eye.
&lt;a href="https://en.wikipedia.org/wiki/Event_camera#Functional_description" target="_blank" rel="noopener"&gt;https://en.wikipedia.org/wiki/Event_camera#Functional_description&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-3"&gt;Levier #1: Réseaux de neurones impulsionnels (SNNs)&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-4"&gt;Levier #1: Réseaux de neurones impulsionnels (SNNs)&lt;/h2&gt;
&lt;figure id="figure-grimaldi-et-al-2023-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="[Grimaldi *et al*, 2023, [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023, &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-5"&gt;Levier #1: Réseaux de neurones impulsionnels (SNNs)&lt;/h2&gt;
&lt;figure id="figure-loihi-2"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://cdn.cnx-software.com/wp-content/uploads/2022/09/Intel-Loihi-2.jpg" alt="Loihi 2" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Loihi 2
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-6"&gt;Levier #1: Réseaux de neurones impulsionnels (SNNs)&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network-grimaldi-et-al-2023httpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/HDSNN_conv.png" alt="The HD-SNN neural network [[Grimaldi *et al*, 2023]](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;These cameras seem to offer only advantages, but then, how can we process this new data representation? Indeed, neuroscience shows that neurons do not manipulate continuous data (as the units of deep learning do), but all communicate in exactly the same way, by exchanging brief prototypical impulses, the action potentials (spikes).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Our solution: an architecture similar to deep learning, but where each neuron (the elementary building block) is a simplified model of a biological spiking neuron. However, compared to established approaches this leaves us with a problem, which we managed to solve theoretically. An additional advantage is that this kind of computation is currently being developed on embedded chips (like the pixels of the event camera); a minimal sketch of such a spiking unit is given at the end of this note.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;our architecture thus operates directly on this same representation. Another advantage: &amp;ldquo;always-on computing&amp;rdquo;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;What results do we get? Can we evaluate them before these chips are available?&lt;/p&gt;
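&lt;p&gt;For illustration only, here is a minimal leaky integrate-and-fire (LIF) unit, the kind of simplified biological neuron model mentioned above; the parameters and the random input are made up and do not come from the HD-SNN paper.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Minimal leaky integrate-and-fire (LIF) sketch; all parameters are illustrative
import numpy as np

dt, tau, v_thresh, v_reset = 1e-3, 20e-3, 1.0, 0.0   # time step (s), membrane time constant (s)
rng = np.random.default_rng(0)
input_current = rng.uniform(0.0, 2.0, size=200)       # toy input drive

v, spike_times = 0.0, []
for step, i_in in enumerate(input_current):
    v += dt / tau * (i_in - v)          # leaky integration of the input
    if v &gt;= v_thresh:                   # threshold crossing emits a spike...
        spike_times.append(step * dt)
        v = v_reset                     # ...and the membrane potential is reset

print(f"{len(spike_times)} spikes in {len(input_current) * dt:.2f} s")
&lt;/code&gt;&lt;/pre&gt;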
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-7"&gt;Levier #1: Réseaux de neurones impulsionnels (SNNs)&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network-grimaldi-et-al-2023httpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy_raw.svg" alt="The HD-SNN neural network [[Grimaldi *et al*, 2023]](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-8"&gt;Levier #1: Réseaux de neurones impulsionnels (SNNs)&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network-grimaldi-et-al-2023httpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy_shortening.svg" alt="The HD-SNN neural network [[Grimaldi *et al*, 2023]](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-9"&gt;Levier #1: Réseaux de neurones impulsionnels (SNNs)&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network-grimaldi-et-al-2023httpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy.svg" alt="The HD-SNN neural network [[Grimaldi *et al*, 2023]](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;p&gt;Time-to-Contact maps &lt;a href="https://laurentperrinet.github.io/publication/nunes-23-iccv" target="_blank" rel="noopener"&gt;[Nunes &lt;em&gt;et al&lt;/em&gt;, 2023]&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTES&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Our simulations thus show a very high efficiency (here for categorizing a type of optic flow, which can guide navigation).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;an innovative aspect of our technology lies in our ability to use just as many neurons but fewer connections. We have moreover shown that the performance remains acceptable: compared with a classical technology (in orange), which shows a rapid drop, our results show a good accuracy, with the critical half-value reached for a gain of 700x (note the log axis). This is what is called &amp;ldquo;frugal computing&amp;rdquo;, and we are now working on its implementation within a PEPR IA project; a toy sketch of such a pruning sweep is given after this list.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;this is an important step, but we can go further, and I will now present a second lever: instead of processing everything, process only what is necessary.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
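&lt;p&gt;To make the idea of trading connections for accuracy concrete, here is a toy pruning sweep on a random linear readout; it only illustrates the shape of such an experiment and is in no way the event-based SNN evaluation reported in the figure.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Toy sketch of a pruning sweep: accuracy of a linear readout vs. fraction of synapses kept
import numpy as np

rng = np.random.default_rng(1)
n_features, n_classes, n_samples = 256, 4, 2000
W_dense = rng.normal(size=(n_classes, n_features))
X = rng.normal(size=(n_samples, n_features))
y = (X @ W_dense.T).argmax(axis=1)                    # labels defined by the dense "teacher"

for keep in [1.0, 0.1, 0.01, 0.001]:                  # fraction of connections kept (log scale)
    mask = rng.choice([0.0, 1.0], size=W_dense.shape, p=[1 - keep, keep])
    y_hat = (X @ (W_dense * mask).T).argmax(axis=1)   # pruned readout
    print(f"keep={keep:7.3f}  accuracy={(y_hat == y).mean():.2f}")
&lt;/code&gt;&lt;/pre&gt;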
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="levier-2-vision-active--active-vision"&gt;Levier #2: Vision active / &lt;em&gt;Active Vision&lt;/em&gt;&lt;/h2&gt;
&lt;figure id="figure-jérémie-et-al-2024httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-24-ccn/featured.png" alt="[[Jérémie *et al*, 2024](https://laurentperrinet.github.io/publication/jeremie-25)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25" target="_blank" rel="noopener"&gt;Jérémie &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;To introduce this, let me first illustrate it with the work of the Russian researcher Yarbus at the beginning of the last century. When a visual scene is presented to an observer (as for the painting in panel A), the observer performs a series of jumps across the image, which are called saccades.&lt;/p&gt;
&lt;p&gt;Indeed, our vision has the property of being foveated, such that a major part of our visual processing is concentrated along the axis of gaze. This property co-evolved with the ability to perform rapid eye movements and confers an evolutionary advantage on predators, which can act more quickly on their environment to catch prey.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-2-vision-active--active-vision-1"&gt;Levier #2: Vision active / &lt;em&gt;Active Vision&lt;/em&gt;&lt;/h2&gt;
&lt;figure id="figure-kremkow-et-al-2018"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.researchgate.net/profile/Jose-Manuel-Alonso/publication/325517455/figure/fig6/AS:968126468476930@1607830745875/Cortical-map-for-retinotopy-a-d-Visual-fields-and-their-cortical-representation-in_W640.jpg" alt="[Kremkow *et al*, 2018]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Kremkow &lt;em&gt;et al&lt;/em&gt;, 2018]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="levier-2-vision-active--active-vision-2"&gt;Levier #2: Vision active / &lt;em&gt;Active Vision&lt;/em&gt;&lt;/h2&gt;
&lt;figure id="figure-jérémie-et-al-2024httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/featured.jpg" alt="[[Jérémie *et al*, 2024](https://laurentperrinet.github.io/publication/jeremie-25/)]" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25/" target="_blank" rel="noopener"&gt;Jérémie &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;This capacity to act on the sensory input, and in particular to have this kind of attentional capability, is largely absent from classical machine-learning approaches, and we were able to implement it thanks to the ANR project.&lt;/p&gt;
&lt;p&gt;To do so, we used a log-polar transform, which concentrates information around the axis of gaze, as can be seen inside the area materialized by the grey zone. Note also the importance of the point on which the gaze lands, in particular whether it is far from or close to the object of interest; a minimal sketch of such a mapping is given below.&lt;/p&gt;
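&lt;p&gt;The following is a minimal, generic sketch of such a log-polar sampling grid: radii grow geometrically with eccentricity, so samples are dense near the fixation point and sparse in the periphery. The parameters are illustrative and this is not the exact retinotopic transform used in the paper.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Generic log-polar sampling grid centred on the fixation point (illustrative parameters)
import numpy as np

def log_polar_grid(n_ecc=8, n_theta=16, r_min=2.0, r_max=128.0):
    """Return (x, y) sample positions whose radii grow geometrically with eccentricity."""
    radii = np.geomspace(r_min, r_max, n_ecc)            # log-spaced eccentricities
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    r, t = np.meshgrid(radii, thetas, indexing="ij")
    return r * np.cos(t), r * np.sin(t)                  # dense centre, sparse periphery

x, y = log_polar_grid()
print(x.shape, float(np.hypot(x, y).min()), float(np.hypot(x, y).max()))
&lt;/code&gt;&lt;/pre&gt;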
&lt;/aside&gt;
&lt;pre&gt;&lt;code&gt;---
## Levier #2: Vision active / *Active Vision*
&lt;figure id="figure-jérémie-et-al-2024httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/grid.gif" alt="[[Jérémie *et al*, 2024](https://laurentperrinet.github.io/publication/jeremie-25)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25" target="_blank" rel="noopener"&gt;Jérémie &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2 id="levier-2-vision-active--active-vision-3"&gt;Levier #2: Vision active / &lt;em&gt;Active Vision&lt;/em&gt;&lt;/h2&gt;
&lt;figure id="figure-jérémie-et-al-2024httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/fig_attack_rotation_imagenet.png" alt="[[Jérémie *et al*, 2024](https://laurentperrinet.github.io/publication/jeremie-25)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25" target="_blank" rel="noopener"&gt;Jérémie &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTES&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;surprisingly, despite the loss of resolution in the periphery, we obtain results comparable to the state of the art, yet more robust to rotations and zooms.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;it is important to note that it can process images of arbitrary size, something that remains an important limitation of current CNNs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An ongoing perspective is first to adapt this capability to SNNs, but also&amp;hellip;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-2-vision-active--active-vision-4"&gt;Levier #2: Vision active / &lt;em&gt;Active Vision&lt;/em&gt;&lt;/h2&gt;
&lt;figure id="figure-jérémie-et-al-2024httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/multi_label.jpg" alt="[[Jérémie *et al*, 2024](https://laurentperrinet.github.io/publication/jeremie-25)]" loading="lazy" data-zoomable width="60%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25" target="_blank" rel="noopener"&gt;Jérémie &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="levier-2-vision-active--active-vision-5"&gt;Levier #2: Vision active / &lt;em&gt;Active Vision&lt;/em&gt;&lt;/h2&gt;
&lt;figure id="figure-jérémie-et-al-2024httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/fig_areadne.png" alt="[[Jérémie *et al*, 2024](https://laurentperrinet.github.io/publication/jeremie-25)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25" target="_blank" rel="noopener"&gt;Jérémie &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTES&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;to include saccades, that is, to complement the system I have just presented, which identifies objects in an image, with a system that anticipates where to look in the image.
This division of labour is inspired by the dorsal (parietal) and ventral pathways of the human visual system.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;PEPR IA: multiple saccades and attention&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;how can these two levers be integrated into an embedded system?&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://laurentperrinet.github.io/slides/2025-02-14-supaero/?transition=fade"&gt;
&lt;h2&gt;What can &lt;i&gt;Neuroscience&lt;/i&gt; bring to &lt;i&gt;Artificial Intelligence&lt;/i&gt;?&lt;/h2&gt;
&lt;/a&gt;
&lt;br&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="ANR" width="98%"&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;th&gt;
[2025-02-14] Airbus Helicopters&lt;br&gt;
&lt;i&gt; Laurent Perrinet &lt;/i&gt; &amp;horbar;
&lt;a href="https://laurentperrinet.github.io"&gt;https://laurentperrinet.github.io&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;img src="https://laurentperrinet.github.io/qrcode.png" alt="QR code" height="10%" width="10%"&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;summary: embedded AI involves major challenges.&lt;/li&gt;
&lt;li&gt;neuroscience can make a major contribution to solving the challenges of embedded AI - &lt;strong&gt;importance of fundamental research&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;one objective: to gain scientific independence = the &amp;ldquo;Active Loop&amp;rdquo; project, for which I am looking for partners.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2025-02-11-neuromath</title><link>https://laurentperrinet.github.io/slides/2025-02-11-neuromath/</link><pubDate>Tue, 11 Feb 2025 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2025-02-11-neuromath/</guid><description>&lt;section&gt;
&lt;h2&gt;&lt;u&gt;
[2025-02-11] When Cortical Neurons Talk Sideways: Beyond Feedforward Visual Processing
&lt;/u&gt;&lt;/h2&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;!-- &lt;a href="https://laurentperrinet.github.io/grant/anr-anr"&gt; --&gt;
&lt;img src="https://laurentperrinet.github.io/grant/polychronies/featured.png" alt="header" height="300"&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/post/2019-06-22_ardemone/featured.png" alt="header" height="300"&gt;
&lt;/a&gt;--&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://laurentperrinet.github.io/slides/2025-02-11-neuromath/?transition=fade"&gt; &lt;i&gt; Laurent Perrinet &lt;/i&gt; &lt;/a&gt; - &lt;a href="https://laurentperrinet.github.io"&gt;https://laurentperrinet.github.io&lt;/a&gt;
&lt;br&gt;
Séminaire Neuromathématiques, &lt;b&gt;Collège de France&lt;/b&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;img src="https://laurentperrinet.github.io/qrcode.png" alt="QR code" height="80" width="80"&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Hi, thanks for the introduction! I am Laurent Perrinet, a researcher in computational neuroscience and currently a research director at CNRS at the Institute of Neuroscience of la Timone in Marseille. &lt;strong&gt;Thank you&lt;/strong&gt; for inviting me to participate in this &amp;ldquo;NeuroMathematics&amp;rdquo; seminar at the intersection of mathematics and neuroscience.&lt;/p&gt;
&lt;p&gt;As an engineer by training, I could have pursued a career in aeronautics rather than becoming a neuroscientist. It is thanks to my mathematics professor &lt;strong&gt;Manuel Samuelides&lt;/strong&gt; that I discovered the beauty of neural networks at the end of my engineering studies. This sparked a curiosity and, thanks to him, I was also able to enrol in a master&amp;rsquo;s programme in cognitive science (now called the CogMaster) in 1998. It is there that I particularly want to acknowledge &lt;strong&gt;Jean Petitot&lt;/strong&gt;: through his course I discovered how natural image statistics could be linked to organizing principles of the central nervous system. This was a vivid revelation, and I&amp;rsquo;m grateful for his guidance in my academic path. Today&amp;rsquo;s seminar represents a return to these roots, as I&amp;rsquo;ll present my research progress since my master&amp;rsquo;s thesis on this very topic.&lt;/p&gt;
&lt;p&gt;Today, I will address our current knowledge about &lt;strong&gt;horizontal connectivity rules in V1&lt;/strong&gt;. Why is this important? As a matter of fact, one main function of sensory systems, such as the pivotal role of the primary visual cortex for vision, is to bind together the different visual features to help ultimately build a global perception.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://3minutosdearte.com/wp-content/uploads/2016/11/Mir%C3%B3-Paisaje-catal%C3%A1n-el-cazador-1923-24-e1534625628322.jpg"
&gt;
&lt;!-- &lt;img src="https://3minutosdearte.com/wp-content/uploads/2016/11/Mir%C3%B3-Paisaje-catal%C3%A1n-el-cazador-1923-24-e1534625628322.jpg" height="420"/&gt; --&gt;
&lt;!-- [Paysage catalan (Le Chasseur) [Joan Miró, 1924]](https://fr.wikipedia.org/wiki/Paysage_catalan_(Le_Chasseur)) --&gt;
&lt;table&gt;
&lt;tr &gt;
&lt;th&gt;
&lt;a href ="https://fr.wikipedia.org/wiki/Paysage_catalan_(Le_Chasseur)"&gt;Paysage catalan (Le Chasseur), &lt;i&gt;Joan Miró&lt;/i&gt; (1924)&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;tr style="height:600px;"&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;To rephrase the expression &lt;a href="https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences" target="_blank" rel="noopener"&gt;&amp;ldquo;The Unreasonable Effectiveness of Mathematics&amp;rdquo;&lt;/a&gt; by Wigner, the &amp;ldquo;unreasonable efficiency of vision&amp;rdquo; is playfully illustrated in this painting by Joan Miró, which depicts this Catalan landscape with just a few strokes: our imagination fills the gaps and signifies the landscape, allowing us to imagine the hunter, the sardine or the plane.&lt;/p&gt;
&lt;p&gt;This is so striking that lines or contours may appear even when they do not exist, such as in this display created with the visual artist Etienne Rey (beware! it will likely tickle your eyes).&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://laurentperrinet.github.io/post/2018-04-10_trames/featured.png"
&gt;
&lt;table&gt;
&lt;tr &gt;
&lt;th&gt;
&lt;a href ="https://laurentperrinet.github.io/post/2018-04-10_trames/"&gt;Trames (Etienne Rey)&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;tr style="height:600px;"&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
With only dots arranged in two hexagonal grids simply shifted by an angle of 9°, we still see lines, such as a lower-frequency hexagonal grid, and even an illusion of depth. Notice how this illusion depends on the position of your eye and therefore of your retina. Can we make sense of these phenomena?
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="contour-detection-and-the-association-field"&gt;Contour detection and the Association Field&lt;/h2&gt;
&lt;figure id="figure-field-et-al-1993"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Field1993Fig3B.jpg" alt="[Field *et al*, 1993]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Field &lt;em&gt;et al&lt;/em&gt;, 1993]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
This percept of continuity was already framed in the &lt;strong&gt;Gestalt&lt;/strong&gt; paradigm and was further developed into a quantitative framework. This seminal work by Field, Hayes and Hess in 1993 demonstrated that observers were better at detecting contours formed by aligned Gabor patches than by randomly oriented ones, much as a contour may preferentially emerge from a dense field of edges.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="contour-detection-and-the-association-field-1"&gt;Contour detection and the Association Field&lt;/h2&gt;
&lt;figure id="figure-field-et-al-1993"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Field1993Fig3.jpg" alt="[Field *et al*, 1993]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Field &lt;em&gt;et al&lt;/em&gt;, 1993]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Their psychophysical experiments showed that detection performance was best when elements were co-aligned and degraded systematically as the relative orientation between elements increased. This highlighted the edge parameters that matter for grouping, such as relative orientation and distance, but not phase; a sketch of such a Gabor element is given below.
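&lt;p&gt;For readers unfamiliar with the stimuli, here is a minimal sketch of a single oriented Gabor patch, the element used to build such contour displays; the size and spatial frequency are arbitrary choices, not those of the original experiments.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Sketch of one oriented Gabor patch (illustrative parameters only)
import numpy as np

def gabor(size=64, wavelength=8.0, theta_deg=30.0, sigma=8.0, phase=0.0):
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    theta = np.deg2rad(theta_deg)
    xr = x * np.cos(theta) + y * np.sin(theta)            # coordinate along the carrier
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))    # isotropic Gaussian envelope
    carrier = np.cos(2 * np.pi * xr / wavelength + phase) # oriented sinusoidal carrier
    return envelope * carrier

patch = gabor(theta_deg=30.0)
print(patch.shape, float(patch.min()), float(patch.max()))
&lt;/code&gt;&lt;/pre&gt;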
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="contour-detection-and-the-association-field-2"&gt;Contour detection and the Association Field&lt;/h2&gt;
&lt;figure id="figure-field-et-al-1993"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/AssoFieldNoBosking.png" alt="[Field *et al*, 1993]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Field &lt;em&gt;et al&lt;/em&gt;, 1993]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Consequently, they proposed that this perceptual grouping relies on an &amp;ldquo;association field&amp;rdquo; - a hypothetical linking mechanism that preferentially connects neurons tuned to similar orientations.
But where does this association field come from?
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="natural-images--edges-are-on-a-common-circle"&gt;Natural Images : Edges are on a common circle&lt;/h2&gt;
&lt;figure id="figure-sigman-et-al-2001"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Sigman2001Fig4.jpg" alt="[Sigman *et al*, 2001]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Sigman &lt;em&gt;et al&lt;/em&gt;, 2001]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;A significant contribution to understanding the association field came from studying &lt;strong&gt;edge co-occurrences in natural images&lt;/strong&gt; by Sigman et al. (2001). They quantified the probability density function of edge co-occurrences based on their relative positions and orientations. The figure demonstrates this by showing the spatial distribution patterns for edges relative to a reference edge at different orientations. For iso-oriented edges (a), the co-occurrence pattern shows clear structure. As the relative orientation increases through 22.5° (b), 45° (c), 67.5° (d), to 90° (e), distinct spatial patterns emerge.&lt;/p&gt;
&lt;p&gt;A key finding was that for any given relative orientation between edges, the angle of maximal interaction occurs at the bisector between the orientations. This suggests that &lt;strong&gt;co-occurring edges tend to lie on a common circle&lt;/strong&gt; - a property known as cocircularity. Panel (f) illustrates this geometrical principle: given two edges at angles w (red, 20°) and c (blue, 40°), the cocircularity solutions (green lines at 30° and 120°) represent the possible orientations of connecting circular arcs. This mathematical relationship provides insights into how the visual system might leverage statistical regularities in natural scenes for contour integration. We will go back into the details of this a bit further in the talk.&lt;/p&gt;
&lt;p&gt;This association field concept provided a compelling framework for understanding how the visual system may implement contour integration through neural connectivity patterns. but before going there we should go back to the &lt;strong&gt;basic anatomy of the visual cortex&lt;/strong&gt;.&lt;/p&gt;
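&lt;p&gt;As a quick numerical check of the geometry described above (a small illustration, not the analysis of Sigman et al.), the orientation of maximal interaction can be computed as the circular bisector of the two edge orientations; for the 20° and 40° example of panel (f) this gives 30°, and the orthogonal solution 120°.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Co-circularity check: the direction of maximal interaction is the bisector (mod 180 deg)
import numpy as np

def bisector_deg(theta1, theta2):
    """Circular mean of two orientations, which are defined modulo 180 degrees."""
    z = np.exp(2j * np.deg2rad([theta1, theta2])).mean()   # double-angle trick for orientations
    return np.rad2deg(np.angle(z)) / 2 % 180

phi = bisector_deg(20, 40)
print(phi, (phi + 90) % 180)   # 30.0 and 120.0, the values quoted for panel (f)
&lt;/code&gt;&lt;/pre&gt;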
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="contour-detection-and-the-association-field-3"&gt;Contour detection and the Association Field&lt;/h2&gt;
&lt;figure id="figure-field-et-al-1993"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/AssoFieldNoBosking.png" alt="[Field *et al*, 1993]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Field &lt;em&gt;et al&lt;/em&gt;, 1993]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-human-visual-system-grimaldi-et-al-2022httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency_bg.jpg" alt="Human Visual system ([Grimaldi *et al* 2022](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Human Visual system (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Grimaldi &lt;em&gt;et al&lt;/em&gt; 2022&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;&amp;lt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Let&amp;rsquo;s begin with the &lt;strong&gt;anatomy&lt;/strong&gt; of the visual system.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="anatomy-of-the-human-visual-system"&gt;Anatomy of the Human Visual system&lt;/h2&gt;
&lt;figure id="figure-human-visual-system-grimaldi-et-al-2022httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency.jpg" alt="Human Visual system ([Grimaldi *et al* 2022](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Human Visual system (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Grimaldi &lt;em&gt;et al&lt;/em&gt; 2022&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The diagram shows the human visual pathways, where information flows from the &lt;strong&gt;retina&lt;/strong&gt; through the optic nerve to reach the lateral geniculate nucleus in the thalamus. From there, signals project to the &lt;strong&gt;primary visual cortex&lt;/strong&gt; (V1) where neurons are selective to local oriented edges. Information then proceeds through higher visual areas following two main streams - the ventral &amp;ldquo;what&amp;rdquo; pathway (which I show here) and the dorsal &amp;ldquo;where/how&amp;rdquo; pathway. This hierarchical organization allows for increasingly complex visual processing, ultimately enabling motor responses and behavior. The &lt;strong&gt;latencies&lt;/strong&gt; shown in the figure indicate the sequential timing of neural activation across these processing stages.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="thalamic-short---long-range-lateral-inter-areal"&gt;Thalamic, short- &amp;amp; long-range lateral, inter-areal&lt;/h2&gt;
&lt;!--
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-07-neurocomp/featured.png" alt="" loading="lazy" data-zoomable height="200" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/cortical-columns_a_02_cl_vis_3e.jpg" alt="" loading="lazy" data-zoomable height="150" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
--&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/cortical-columns.jpg" alt="" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;A key feature of primary visual cortex is its &lt;strong&gt;layered organization&lt;/strong&gt;, which is shared across cortical areas. The main thalamic input arrives in layer 4, which connects to a dense network of vertical connections across layers. These columns can then communicate via horizontal connections within layers.&lt;/p&gt;
&lt;p&gt;Hubel and Wiesel also proposed the &lt;strong&gt;ice-cube model&lt;/strong&gt;, in which every point in the visual field produces a response in a 2 mm x 2 mm area of the cortex. Such an area can contain two complete sets of ocular dominance columns, 16 blobs and interblobs, and more than two complete sets of the orientations spanning 180 degrees. This region of the cortex, which Hubel and Wiesel called a hypercolumn (or, more generally, a cortical module), seems both necessary and sufficient for analyzing the image of a point in visual space. Because the cortex is a continuous cellular layer and because it is very hard to establish the boundaries of these modules physically, their existence from a functional standpoint is still the subject of debate.
&lt;a href="https://thebrain.mcgill.ca/flash/a/a_02/a_02_cl/a_02_cl_vis/a_02_cl_vis.html" target="_blank" rel="noopener"&gt;https://thebrain.mcgill.ca/flash/a/a_02/a_02_cl/a_02_cl_vis/a_02_cl_vis.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Figure 9.2. Hypercolumn Diagram. Ocular dominance columns are segregated into left and right eye inputs. Orientation columns are neurons that get excited at different orientations and a cluster of these is called a pinwheel. Blobs are color selective and for every pinwheel there is a blob. (Credit: McGill: The Brain from Top to Bottom, Figure of hypercolumns, Copyleft &lt;a href="https://copyleft.org/" target="_blank" rel="noopener"&gt;https://copyleft.org/&lt;/a&gt;, &lt;a href="https://thebrain.mcgill.ca/flash/a/a_02/a_02_cl/a_02_cl_vis/a_02_cl_vis.html" target="_blank" rel="noopener"&gt;https://thebrain.mcgill.ca/flash/a/a_02/a_02_cl/a_02_cl_vis/a_02_cl_vis.html&lt;/a&gt;. No modifications.)&lt;/p&gt;
&lt;p&gt;From: &lt;a href="https://pressbooks.umn.edu/sensationandperception/chapter/columns-and-hypercolumns-in-v1/" target="_blank" rel="noopener"&gt;https://pressbooks.umn.edu/sensationandperception/chapter/columns-and-hypercolumns-in-v1/&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="thalamic-short---long-range-lateral-inter-areal-1"&gt;Thalamic, short- &amp;amp; long-range lateral, inter-areal&lt;/h2&gt;
&lt;figure id="figure-markov-et-al-2011"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Markov2011Fig2_cercorbhq201f02_ht.jpg" alt="[Markov *et al* 2011]" loading="lazy" data-zoomable height="380" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Markov &lt;em&gt;et al&lt;/em&gt; 2011]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
This figure from Markov et al. (2011) quantifies intrinsic connectivity patterns in macaque V1 through retrograde tracer injections. The data shows that 85% of connections are intra-areal, with connection density decreasing exponentially with distance (characteristic length ~0.23mm). Most connections (80%) remain within 1.5mm radius - notably close given the ~0.5mm spacing between orientation pinwheels. This provides strong evidence that the vast majority of inputs to V1 neurons come from within V1 itself rather than from other areas, suggesting local processing plays a dominant role in V1 computation.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="anatomy-of-the-primary-visual-cortex"&gt;Anatomy of the Primary Visual Cortex&lt;/h2&gt;
&lt;figure id="figure-kaschube-et-al-2010"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Kaschube2010Fig1.jpg" alt="[Kaschube *et al* (2010)]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Kaschube &lt;em&gt;et al&lt;/em&gt; (2010)]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;V1 is central to these pathways and shows distinctive anatomical and functional properties along with a complex topographical organization.&lt;/p&gt;
&lt;p&gt;This figure from Kaschube et al. (2010) illustrates the &lt;strong&gt;organization of orientation preference maps&lt;/strong&gt; in primary visual cortex (V1).
Individual V1 neurons exhibit selective responses to oriented visual stimuli (colors code the preferred orientation, as indicated by the bars in panel C), with their spatial arrangement following highly structured patterns across the cortical surface.
Panel B shows synthetic orientation maps of equal column spacing Λ but widely different pinwheel densities ρ (left to right: solutions of different models). Panel C shows high (blue frame) and low (orange frame) pinwheel-density regions in tree shrew visual cortex. Panels D to F show optically recorded orientation maps in tree shrew (D), galago (E) and ferret (F) visual cortex; the regions shown in C are marked in D, white arrows in F mark selected pinwheel centers, and the framed regions in C and F are magnified.
In many mammals including cats, monkeys and ferrets, orientation preference is organized in a quasi-periodic manner, forming what are known as orientation preference maps. These maps show remarkable consistency in their geometric properties across species, particularly in the spatial organization of pinwheel centers where orientation preferences converge.&lt;/p&gt;
&lt;p&gt;However, this organization shows important &lt;strong&gt;species-specific variations&lt;/strong&gt;. Most notably, while primates and carnivores display orderly orientation maps with smooth transitions between preferred orientations, rodents lack such maps and instead show a &amp;ldquo;salt-and-pepper&amp;rdquo; arrangement where neighboring neurons have seemingly random orientation preferences. This organizational diversity raises interesting questions about the computational advantages of these different architectures and their relationship to visual processing requirements and behavioral needs across species.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="horizontal-connectivity-links-different-hypercolumns"&gt;Horizontal connectivity links different hypercolumns&lt;/h2&gt;
&lt;figure id="figure-bosking-et-al-1997"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Bosking97Fig4.jpg" alt="[Bosking *et al*, 1997]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Bosking &lt;em&gt;et al&lt;/em&gt;, 1997]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
This figure shows landmark results by Bosking et al. (1997) combining orientation preference maps with retrograde tracers. After injecting tracers (white arrow), they found labeled synapses (black dots) primarily connecting neurons of similar orientation preference, leading to the influential &amp;ldquo;like-to-like&amp;rdquo; connectivity hypothesis. However, later studies by Hunt, Goodhill and others revealed significant diversity in these connection patterns across cortical regions and species, suggesting more complex connectivity rules than initially proposed. This nuanced understanding has important implications for how we think about the functional organization of horizontal connections in V1.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="contour-detection-and-the-association-field-4"&gt;Contour detection and the Association Field&lt;/h2&gt;
&lt;figure id="figure-field-et-al-1993"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/AssoFieldNoBosking.png" alt="[Field *et al*, 1993]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Field &lt;em&gt;et al&lt;/em&gt;, 1993]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="the-like-to-like-hypothesis"&gt;The like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-field-et-al-2013"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/AssoFieldBosking.png" alt="[Field *et al*, 2013]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Field &lt;em&gt;et al&lt;/em&gt;, 2013]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;The resemblance between what was shown by Bosking and the structure of the association field that we saw above is such that it is tempting to align both and state that the function of horizontal connections is to bind neurons with a selectivity to &lt;em&gt;similar orientations&lt;/em&gt; over long distances. This &lt;strong&gt;like-to-like hypothesis&lt;/strong&gt; has been influential in understanding horizontal connectivity patterns.&lt;/p&gt;
&lt;p&gt;However, we should be cautious about overstating these relationships. While horizontal connections show some orientation specificity, recent evidence indicates the connectivity patterns are &lt;strong&gt;more complex and heterogeneous&lt;/strong&gt; than initially proposed. The functional role of this diverse connectivity remains an active area of investigation.&lt;/p&gt;
&lt;p&gt;During the &lt;strong&gt;remainder of this talk&lt;/strong&gt;, I will try to shed light on our current knowledge of horizontal connectivity.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="supplementary-the-hmax-model"&gt;Supplementary: the HMAX model&lt;/h2&gt;
&lt;figure id="figure-serre-and-poggio-2007"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.researchgate.net/profile/Thomas-Serre/publication/253467382/figure/fig1/AS:298143448092675@1448094345807/a-Organization-of-the-visual-cortex-The-diagram-is-modified-from-Gross-1998-Key.png" alt="[Serre and Poggio, 2007]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Serre and Poggio, 2007]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;and a model of it&amp;hellip;(&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;CNN, the mother of all deep learning models&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="supplementary-convolutional-neural-nets-cnn"&gt;Supplementary: Convolutional Neural Nets (CNN)&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;this can be integrated in a hierarchy&amp;hellip;&lt;/li&gt;
&lt;li&gt;defining a Convolutional Neural Networks (CNN)&lt;/li&gt;
&lt;li&gt;one layer is a convolution&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="supplementary-orientation-selectivity-in-v1"&gt;Supplementary: Orientation selectivity in V1&lt;/h2&gt;
&lt;figure id="figure-hubel--wiesel-1962"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/scientists.jpg" alt="[Hubel &amp; Wiesel, 1962]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Hubel &amp;amp; Wiesel, 1962]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;let&amp;rsquo;s zoom in, the basic ingredient is the receptive field&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="supplementary-orientation-selectivity-in-v1-1"&gt;Supplementary: Orientation selectivity in V1&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://raw.githubusercontent.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/master/figures/ComplexDirSelCortCell250_title.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;[Hubel &amp;amp; Wiesel, 1962]&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a single neuron is selective to some visual features&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="supplementary-marrs-three-levels-of-analysis"&gt;Supplementary: Marr&amp;rsquo;s three levels of analysis&lt;/h2&gt;
&lt;p&gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" height="350"&gt; &lt;span class="fragment " &gt;
&lt;img src="https://outde.xyz/img/Rawski/Marr/7lvls.jpg" height="350"&gt;
&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;cut in different levels: Marr (+ Poggio)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;arbitrary, but useful division of labor= computational / algorithm / hardware&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;here:&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;anatomy&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;algorithm / model&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;function&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;First: What is the anatomy of horizontal connections?&lt;/p&gt;
&lt;!--
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/7lvls.jpg" alt="[[Marr, 1982]](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;[Marr, 1982]&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;!--
&lt;figure id="figure-marr-1982"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="Marr, 1982" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Marr, 1982
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="challenging-the-like-to-like-hypothesis"&gt;Challenging the like-to-like hypothesis&lt;/h1&gt;
&lt;figure id="figure-chavane-lp-and-rankin-2022httpslaurentperrinetgithubiopublicationchavane-22"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/header.png" alt="[[Chavane, LP and Rankin, 2022]](https://laurentperrinet.github.io/publication/chavane-22/)" loading="lazy" data-zoomable height="380" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/chavane-22/" target="_blank" rel="noopener"&gt;[Chavane, LP and Rankin, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Together with my colleagues Frédéric Chavane (INT) and James Rankin (University of Exeter), we published this paper in &lt;strong&gt;Brain Structure and Function&lt;/strong&gt; that reviews anatomical, functional, computational and theoretical evidence &lt;strong&gt;challenging the like-to-like hypothesis.&lt;/strong&gt; The paper evaluates whether this influential hypothesis about V1 horizontal connectivity holds up against accumulated empirical evidence. The review systematically examines multiple lines of research to reassess our understanding of these important cortical circuits.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-1"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-revisiting-horizontal-connectivity-rules-in-v1-from-like-to-like-towards-like-to-all-chavane-lp-and-rankin-2022httpslaurentperrinetgithubiopublicationchavane-22"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Chavane2022fig1A.jpg" alt="Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All [[Chavane, LP and Rankin, 2022]](https://laurentperrinet.github.io/publication/chavane-22/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All &lt;a href="https://laurentperrinet.github.io/publication/chavane-22/" target="_blank" rel="noopener"&gt;[Chavane, LP and Rankin, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;This figure illustrates different hypothetical connectivity rules for horizontal connections in V1. The target neuron (large circle on left) has a specific orientation preference indicated by its color. Following the classical like-to-like hypothesis (shown in panel A), this neuron would preferentially connect to other neurons with matching orientation preference (similar colors) across multiple hypercolumns, as indicated by the vertical red arrows. The radial spread of connections spans approximately three hypercolumns, consistent with anatomical observations. Each hypercolumn contains a complete set of orientation preferences, represented by the different colored neurons.&lt;/p&gt;
&lt;p&gt;This first schematic (noted A) represents one of the like-to-like connectivity rules, where horizontal connections strictly follow orientation similarity.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-2"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-revisiting-horizontal-connectivity-rules-in-v1-from-like-to-like-towards-like-to-all-chavane-lp-and-rankin-2022httpslaurentperrinetgithubiopublicationchavane-22"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Chavane2022fig1AB.jpg" alt="Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All [[Chavane, LP and Rankin, 2022]](https://laurentperrinet.github.io/publication/chavane-22/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All &lt;a href="https://laurentperrinet.github.io/publication/chavane-22/" target="_blank" rel="noopener"&gt;[Chavane, LP and Rankin, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Panel B shows a more nuanced version of the like-to-like hypothesis that we call &amp;ldquo;modulated like-to-like bias&amp;rdquo;. In this case, the target neuron still preferentially connects to neurons with similar orientation preferences, but the selectivity is less strict and extends over longer distances. The connections (shown by the gradients of red arrows) exhibit a smooth fall-off in specificity with distance, rather than the binary selectivity shown in panel A. This model better reflects the biological reality where connection specificity tends to be graded rather than absolute, and where horizontal connections can span multiple hypercolumns while maintaining some degree of orientation preference.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-3"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-revisiting-horizontal-connectivity-rules-in-v1-from-like-to-like-towards-like-to-all-chavane-lp-and-rankin-2022httpslaurentperrinetgithubiopublicationchavane-22"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Chavane2022fig1AD.jpg" alt="Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All [[Chavane, LP and Rankin, 2022]](https://laurentperrinet.github.io/publication/chavane-22/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All &lt;a href="https://laurentperrinet.github.io/publication/chavane-22/" target="_blank" rel="noopener"&gt;[Chavane, LP and Rankin, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Panel C shows evidence for a different type of connectivity pattern in inhibitory interneurons - a &amp;ldquo;like-to-unlike&amp;rdquo; bias where neurons preferentially connect to others with different orientation preferences. This highlights how different cell types may follow distinct connectivity rules.&lt;/p&gt;
&lt;p&gt;Panel D illustrates a &amp;ldquo;like-to-all&amp;rdquo; connectivity pattern that has been observed in layers 4 and 6 of V1, where neurons form connections broadly across orientation preferences without strong selectivity. The arrows indicate connections to neurons of all orientations, suggesting these layers may serve different computational roles that do not require orientation-specific horizontal connectivity.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-4"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-revisiting-horizontal-connectivity-rules-in-v1-from-like-to-like-towards-like-to-all-chavane-lp-and-rankin-2022httpslaurentperrinetgithubiopublicationchavane-22"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Chavane2022fig1AE.jpg" alt="Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All [[Chavane, LP and Rankin, 2022]](https://laurentperrinet.github.io/publication/chavane-22/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All &lt;a href="https://laurentperrinet.github.io/publication/chavane-22/" target="_blank" rel="noopener"&gt;[Chavane, LP and Rankin, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Panel E presents an integrative model that combines aspects of the previous hypotheses. It shows a hybrid connectivity pattern where neurons exhibit a like-to-like bias at short distances (within adjacent hypercolumns), but this orientation specificity gradually diminishes with distance, transitioning to a like-to-all pattern in more distant hypercolumns. This model better reflects recent empirical findings suggesting that horizontal connectivity rules are more complex and distance-dependent than originally proposed. The gradual fade of red arrows illustrates how connection specificity weakens over larger cortical distances.
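&lt;p&gt;One way to picture the hybrid rule of panel E is as a connection weight whose orientation specificity decays with cortical distance; the functional form and constants below are purely illustrative, chosen only to reproduce the qualitative trend described in the review.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Toy connectivity rule: like-to-like bias at short range fading to like-to-all at long range
import numpy as np

def connection_weight(delta_theta_deg, distance_mm, lam=1.0):
    """Distance-dependent mixture of an orientation-tuned term and a flat term."""
    specificity = np.exp(-distance_mm / lam)                       # bias fades with distance
    tuned = 0.5 * (1 + np.cos(2 * np.deg2rad(delta_theta_deg)))    # maximal for same orientation
    return specificity * tuned + (1 - specificity) * 0.5           # flat like-to-all floor

for d in [0.5, 1.5, 3.0]:
    w_iso, w_cross = connection_weight(0.0, d), connection_weight(90.0, d)
    print(f"d={d:3.1f} mm  iso={w_iso:.2f}  cross={w_cross:.2f}")
&lt;/code&gt;&lt;/pre&gt;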
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-5"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/chavane-22/area17_lo_diff_circ_plot.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Let&amp;rsquo;s first show some functional evidence.&lt;/p&gt;
&lt;p&gt;This video shows voltage-sensitive dye imaging (VSDI) data from cat primary visual cortex (area 17) in response to a local oriented grating stimulus. The visualization reveals two key aspects:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The broader activation pattern shown by overall fluorescence changes (gray)&lt;/li&gt;
&lt;li&gt;The more restricted orientation-selective response pattern (colored regions)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Two contours are overlaid: a red line marking the boundary of significant activation, and a white line delineating regions with statistically significant orientation selectivity. The orientation selectivity is encoded by color hue.&lt;/p&gt;
&lt;p&gt;The bottom plots quantify the spatiotemporal dynamics by showing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Left: The total activated cortical area over time&lt;/li&gt;
&lt;li&gt;Right: The extent of orientation-selective regions over time&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Together, these measurements demonstrate how orientation-selective signals propagate laterally beyond the classical feedforward input zone through horizontal connections, while maintaining some degree of feature selectivity.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-6"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-revisiting-horizontal-connectivity-rules-in-v1-from-like-to-like-towards-like-to-all-chavane-lp-and-rankin-2022httpslaurentperrinetgithubiopublicationchavane-22"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Chavane2022fig2A.jpg" alt="Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All [[Chavane, LP and Rankin, 2022]](https://laurentperrinet.github.io/publication/chavane-22/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All &lt;a href="https://laurentperrinet.github.io/publication/chavane-22/" target="_blank" rel="noopener"&gt;[Chavane, LP and Rankin, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;This figure shows spatial and temporal dynamics of orientation selectivity in cat V1 analyzed from voltage-sensitive dye imaging data. Panel A displays a cortical orientation map averaged over the final 145ms of the response, where hue indicates preferred orientation and brightness shows orientation tuning strength. The dotted red line delineates the expected retinotopic boundary of feedforward input based on Albus (2004).&lt;/p&gt;
&lt;p&gt;The inset quantitatively compares the spatial extent of:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Total cortical activation (grey contour)&lt;/li&gt;
&lt;li&gt;Orientation-selective activation (black contour)&lt;/li&gt;
&lt;li&gt;Theoretical feedforward input boundary (red contour)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This data demonstrates that orientation-selective responses propagate laterally beyond the classical feedforward input zone through horizontal connections, while maintaining some degree of feature selectivity. The systematic comparison between total activation and selective activation provides direct evidence for how horizontal connectivity shapes the spatiotemporal dynamics of orientation processing in V1.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-7"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-revisiting-horizontal-connectivity-rules-in-v1-from-like-to-like-towards-like-to-all-chavane-lp-and-rankin-2022httpslaurentperrinetgithubiopublicationchavane-22"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Chavane2022fig2AB.jpg" alt="Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All [[Chavane, LP and Rankin, 2022]](https://laurentperrinet.github.io/publication/chavane-22/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All &lt;a href="https://laurentperrinet.github.io/publication/chavane-22/" target="_blank" rel="noopener"&gt;[Chavane, LP and Rankin, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Panel B presents a comprehensive population analysis spanning nine hemispheres (three from area 17 marked with &amp;lsquo;o&amp;rsquo; and six from area 18 marked with &amp;lsquo;+&amp;rsquo;) examining how orientation selectivity changes with horizontal distance. The top plot shows the iso-orientation bias as a function of lateral spread distance, beginning from the initial cortical activation point. An exponential decay function (shown in black) fits this relationship. The bottom plot quantifies how the condition-wise modulation depth diminishes as the lateral propagation distance increases. Together, these results demonstrate a systematic weakening of orientation selectivity with increasing horizontal distance from the activation site.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-8"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-revisiting-horizontal-connectivity-rules-in-v1-from-like-to-like-towards-like-to-all-chavane-lp-and-rankin-2022httpslaurentperrinetgithubiopublicationchavane-22"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Chavane2022fig2AC.jpg" alt="Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All [[Chavane, LP and Rankin, 2022]](https://laurentperrinet.github.io/publication/chavane-22/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All &lt;a href="https://laurentperrinet.github.io/publication/chavane-22/" target="_blank" rel="noopener"&gt;[Chavane, LP and Rankin, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Panel C displays intracellular recordings of subthreshold responses visualized as a visuotopic orientation polar map. The color hue represents preferred orientation while brightness indicates the strength of orientation tuning in the membrane potential. White contours outline regions showing statistically significant responses based on both amplitude and orientation selectivity criteria. The middle plots show averaged subthreshold responses to four different oriented stimuli (color-coded) at specific recording locations (marked by circle, triangle and square symbols), with scale bars indicating 50 ms and 1 mV. On the right, normalized orientation tuning curves are shown, computed by integrating responses within a fixed temporal window (shaded region in middle panel). The black circle marks the spontaneous activity level for the depolarizing integral measurement.&lt;/p&gt;
&lt;p&gt;This provides direct functional evidence for a diversity of tuning profiles in the horizontal connectivity.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-9"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-voges-and-lp-2012httpslaurentperrinetgithubiopublicationvoges-12"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/voges-12/featured.jpg" alt="[[Voges and LP, 2012]](https://laurentperrinet.github.io/publication/voges-12/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/voges-12/" target="_blank" rel="noopener"&gt;[Voges and LP, 2012]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
To quantitatively understand how connectivity patterns shape network dynamics, we previously showed in simulated neural networks that transitioning from local unspecific to local specific and long-range patchy connectivities can fundamentally alter emergent activity patterns [Voges &amp;amp; LP, 2012]. This highlights how the detailed organization of horizontal connections plays a crucial role in shaping the dynamics of recurrent neural circuits. We will examine this computational aspect further in our review of the evidence challenging strict like-to-like connectivity rules.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-10"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-revisiting-horizontal-connectivity-rules-in-v1-from-like-to-like-towards-like-to-all-chavane-lp-and-rankin-2022httpslaurentperrinetgithubiopublicationchavane-22"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Chavane2022fig4ABC.jpg" alt="Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All [[Chavane, LP and Rankin, 2022]](https://laurentperrinet.github.io/publication/chavane-22/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All &lt;a href="https://laurentperrinet.github.io/publication/chavane-22/" target="_blank" rel="noopener"&gt;[Chavane, LP and Rankin, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Figure 4 illustrates a neural field model that bridges anatomical structure with functional observations in V1, as developed by Rankin and Chavane (2017).&lt;/p&gt;
&lt;p&gt;Panel A depicts radial connectivity profiles with Gaussian-decaying inhibition and distance-dependent excitation that peaks periodically at multiples of distance L. The Ring Width (RW) parameter controls the spread of these excitatory peaks.&lt;/p&gt;
&lt;p&gt;Panel B shows how local orientation preference maps influence lateral connectivity patterns under different orientation bias (BR) values in the recurrent connections.&lt;/p&gt;
&lt;p&gt;Panel C quantifies the orientation tuning that emerges from these connectivity patterns. While orientations are uniformly represented globally, the local excitatory component shows strong bias around -60°. As BR increases above 0.5, the lateral connection orientation bias strengthens, reaching values around k=1 (consistent with Buzás et al. 2006).&lt;/p&gt;
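&lt;p&gt;For the notes, here is a minimal numerical sketch of such a radial connectivity profile (a Gaussian-decaying inhibitory envelope plus excitatory rings of width RW centred on integer multiples of the spacing L); the function and all parameter values below are illustrative placeholders, not the values used by Rankin and Chavane (2017).&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def radial_profile(r, L=1.0, RW=0.2, w_exc=1.0, w_inh=0.6, sigma_inh=1.5, n_rings=3):
    """Toy radial connectivity: a Gaussian-decaying inhibitory envelope plus
    excitatory rings of width RW centred on integer multiples of the spacing L.
    All parameter names and values are illustrative placeholders."""
    inhibition = w_inh * np.exp(-0.5 * (r / sigma_inh) ** 2)
    excitation = np.zeros_like(r)
    for k in range(1, n_rings + 1):
        excitation += w_exc * np.exp(-0.5 * ((r - k * L) / RW) ** 2)
    return excitation - inhibition

r = np.linspace(0.0, 4.0, 401)     # cortical distance, in units of L
print(radial_profile(r)[:5])
&lt;/code&gt;&lt;/pre&gt;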
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-11"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-revisiting-horizontal-connectivity-rules-in-v1-from-like-to-like-towards-like-to-all-chavane-lp-and-rankin-2022httpslaurentperrinetgithubiopublicationchavane-22"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Chavane2022fig4ABCDE.jpg" alt="Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All [[Chavane, LP and Rankin, 2022]](https://laurentperrinet.github.io/publication/chavane-22/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All &lt;a href="https://laurentperrinet.github.io/publication/chavane-22/" target="_blank" rel="noopener"&gt;[Chavane, LP and Rankin, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Panel D presents a simulation snapshot at 600ms demonstrating two key activity components: orientation-selective responses (within white contour) confined to the feedforward footprint (FFF, red), and broader non-orientation-specific activity (grey contour) extending beyond.&lt;/p&gt;
&lt;p&gt;Panel E tracks the temporal evolution of both the non-orientation-specific and orientation-selective response areas.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-12"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-revisiting-horizontal-connectivity-rules-in-v1-from-like-to-like-towards-like-to-all-chavane-lp-and-rankin-2022httpslaurentperrinetgithubiopublicationchavane-22"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Chavane2022fig4.jpg" alt="Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All [[Chavane, LP and Rankin, 2022]](https://laurentperrinet.github.io/publication/chavane-22/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All &lt;a href="https://laurentperrinet.github.io/publication/chavane-22/" target="_blank" rel="noopener"&gt;[Chavane, LP and Rankin, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Panel F maps the normalized selective area (relative to the feedforward footprint) across Ring Width (RW) and orientation bias (BR) parameters. White contours delineate anatomically plausible ranges where k values fall between 0.7-1.2, consistent with experimental measurements. The green region indicates parameter combinations that additionally satisfy constraints on both orientation preference and the observed radial decay of selectivity.&lt;/p&gt;
&lt;p&gt;The neural field model effectively connects anatomical connectivity patterns with functional observations of orientation selectivity propagation in V1. The resulting connectivity structure exhibits similarities with &amp;ldquo;association field&amp;rdquo; patterns, suggesting potential optimization for encoding natural image statistics. This framework provides a quantitative basis for investigating computational principles underlying horizontal connectivity in visual cortex.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-13"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-revisiting-horizontal-connectivity-rules-in-v1-from-like-to-like-towards-like-to-all-chavane-lp-and-rankin-2022httpslaurentperrinetgithubiopublicationchavane-22"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Chavane2022fig5A.jpg" alt="Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All [[Chavane, LP and Rankin, 2022]](https://laurentperrinet.github.io/publication/chavane-22/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All &lt;a href="https://laurentperrinet.github.io/publication/chavane-22/" target="_blank" rel="noopener"&gt;[Chavane, LP and Rankin, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;This figure illustrates the groundbreaking approach developed by Geisler et al. (2001) for analyzing edge statistics in natural images. This landmark work systematically analyzed the occurrence of edge pairs through:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Edge detection using orientation-selective filters (red segments)&lt;/li&gt;
&lt;li&gt;Measuring geometric relationships between edge pairs:
&lt;ul&gt;
&lt;li&gt;Relative orientation difference (𝜃)&lt;/li&gt;
&lt;li&gt;Relative position angle (𝜙)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The analysis revealed robust statistical regularities:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A predominance of parallel edge arrangements&lt;/li&gt;
&lt;li&gt;A strong bias for co-circular edge configurations&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="modelling-the-association-field"&gt;Modelling the Association field&lt;/h1&gt;
&lt;figure id="figure-field-et-al-2013"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/PerrinetBednar15/raw/master/talk/bosking2Asso.png" alt="[Field *et al*, 2013]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Field &lt;em&gt;et al&lt;/em&gt;, 2013]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Understanding how these image statistics relate to cortical connectivity patterns provides key insights into the computational principles underlying horizontal connections in V1.
&lt;/aside&gt;
&lt;!--
---
## Edge co-occurences in natural images
&lt;figure id="figure-edge-co-occurrences-can-account-for-rapid-categorization-of-natural-versus-animal-images-lp-and-bednar-2015httpslaurentperrinetgithubiopublicationperrinet-bednar-15"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-bednar-15/featured.jpg" alt="Edge co-occurrences can account for rapid categorization of natural versus animal images [[LP and Bednar, 2015]](https://laurentperrinet.github.io/publication/perrinet-bednar-15/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Edge co-occurrences can account for rapid categorization of natural versus animal images &lt;a href="https://laurentperrinet.github.io/publication/perrinet-bednar-15/" target="_blank" rel="noopener"&gt;[LP and Bednar, 2015]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Panel A shows a sample image overlaid with detected edges represented as red line segments. Each segment encodes position (center point), orientation, and scale (segment length). The edge detection was controlled to ensure the reconstruction error remained below 5% of the original image energy.&lt;/p&gt;
&lt;p&gt;Panel B illustrates the geometric relationships between edge pairs. For any reference edge A and target edge B, these relationships are quantified by:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Orientation difference (θ)&lt;/li&gt;
&lt;li&gt;Scale ratio (σ)&lt;/li&gt;
&lt;li&gt;Center-to-center distance (d)&lt;/li&gt;
&lt;li&gt;Azimuth difference (φ)&lt;/li&gt;
&lt;li&gt;Co-circularity parameter ψ = φ - θ/2&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Following Geisler et al. (2001), edges outside a central circular mask were excluded to prevent boundary artifacts in the statistical analysis.&lt;/p&gt;
&lt;/aside&gt; --&gt;
&lt;hr&gt;
&lt;h2 id="edge-co-occurences-in-natural-images"&gt;Edge co-occurences in natural images&lt;/h2&gt;
&lt;figure id="figure-revisiting-horizontal-connectivity-rules-in-v1-from-like-to-like-towards-like-to-all-chavane-lp-and-rankin-2022httpslaurentperrinetgithubiopublicationchavane-22"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Chavane2022fig5A.jpg" alt="Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All [[Chavane, LP and Rankin, 2022]](https://laurentperrinet.github.io/publication/chavane-22/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All &lt;a href="https://laurentperrinet.github.io/publication/chavane-22/" target="_blank" rel="noopener"&gt;[Chavane, LP and Rankin, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Panel A illustrates the groundbreaking approach developed by Geisler et al. (2001) for analyzing edge statistics in natural images. The method involves:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Detecting oriented edge elements in natural images (shown as red segments)&lt;/li&gt;
&lt;li&gt;For each edge pair, measuring:
&lt;ul&gt;
&lt;li&gt;Their relative orientation difference (𝜃)&lt;/li&gt;
&lt;li&gt;The relative position angle (𝜙)&lt;/li&gt;
&lt;li&gt;Center-to-center distance (d)&lt;/li&gt;
&lt;li&gt;Azimuth difference (φ)&lt;/li&gt;
&lt;li&gt;Co-circularity parameter ψ = φ - θ/2&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This quantitative analysis reveals two key distributions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A strong bias for parallel edge arrangements, evident in the orientation difference histogram&lt;/li&gt;
&lt;li&gt;A marked preference for co-circular alignments, shown in the relative position histogram&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These statistics vary significantly across image databases. For example, images containing animals exhibit enhanced co-circularity compared to general natural scenes. This suggests that rather than implementing a single fixed association field, the visual system may need to handle diverse statistical regularities present in natural inputs.&lt;/p&gt;
&lt;p&gt;The next section will examine how these statistical regularities inform computational models of the association field.&lt;/p&gt;
&lt;/aside&gt;
&lt;!--
---
## Sparse representations in computer vision
&lt;figure id="figure-lp-and-bednar-2015httpslaurentperrinetgithubiopublicationperrinet-bednar-15"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/PerrinetBednar15/raw/master/figures/figure_synthesis.svg" alt="[[LP and Bednar, 2015]](https://laurentperrinet.github.io/publication/perrinet-bednar-15/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/perrinet-bednar-15/" target="_blank" rel="noopener"&gt;[LP and Bednar, 2015]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
chevrons
&lt;/aside&gt; --&gt;
&lt;hr&gt;
&lt;h2 id="edge-co-occurences-in-natural-images-1"&gt;Edge co-occurences in natural images&lt;/h2&gt;
&lt;p&gt;&lt;img src="https://github.com/laurentperrinet/PerrinetBednar15/raw/master/talk/Geisler01Fig3A.png" height="275"&gt;&lt;img src="https://github.com/laurentperrinet/PerrinetBednar15/raw/master/talk/Geisler01Fig3B.png" height="275"&gt; &lt;img src="https://github.com/laurentperrinet/PerrinetBednar15/raw/master/talk/Geisler01Fig3C.png" height="275"&gt;
[Geisler, 2001]&lt;/p&gt;
&lt;aside class="notes"&gt;
Our analysis reproduced the key findings from Geisler et al. (2001) regarding edge co-occurrence statistics in natural images. Importantly, we observed that these co-occurrence patterns remain invariant with respect to distance, as this parameter depends primarily on viewpoint rather than intrinsic scene structure. Similarly, the statistics show rotational invariance with respect to the reference edge orientation. By leveraging these symmetries and marginalizing over distance and orientation, we were able to reduce the full 4-dimensional co-occurrence distribution to an informationally equivalent 2-dimensional representation over the relative orientation difference θ and the co-circularity parameter ψ = φ - θ/2, where φ is the azimuth difference.
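&lt;p&gt;As a rough illustration of this reduction (a toy sketch under my own conventions, not the analysis code of the paper), one could estimate the reduced histogram p(ψ, θ) from a list of already-detected edges as follows; the dummy edge positions, orientations and angle-wrapping conventions are assumptions made only for this example.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def chevron_map(x, y, theta, n_bins=24):
    """Toy estimate of the joint histogram p(psi, d_theta) over all edge pairs,
    where d_theta is the orientation difference and psi = phi - d_theta / 2,
    with phi the azimuth of edge j as seen from edge i (relative to edge i).
    Inputs: positions x, y and orientations theta (radians) of detected edges."""
    i, j = np.triu_indices(len(x), k=1)                       # all unordered edge pairs
    d_theta = (theta[j] - theta[i] + np.pi / 2) % np.pi - np.pi / 2
    phi = np.arctan2(y[j] - y[i], x[j] - x[i]) - theta[i]
    psi = (phi - d_theta / 2 + np.pi / 2) % np.pi - np.pi / 2
    hist, _, _ = np.histogram2d(psi, d_theta, bins=n_bins,
                                range=[[-np.pi / 2, np.pi / 2]] * 2, density=True)
    return hist

rng = np.random.default_rng(0)
x, y, theta = rng.uniform(size=(3, 200))                      # dummy edges for the demo
print(chevron_map(x, y, theta * np.pi).shape)
&lt;/code&gt;&lt;/pre&gt;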
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="edge-co-occurences-in-natural-images-2"&gt;Edge co-occurences in natural images&lt;/h2&gt;
&lt;figure id="figure-edge-co-occurrences-can-account-for-rapid-categorization-of-natural-versus-animal-images-lp-and-bednar-2015httpslaurentperrinetgithubiopublicationperrinet-bednar-15"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-bednar-15/figure_chevrons.png" alt="Edge co-occurrences can account for rapid categorization of natural versus animal images [[LP and Bednar, 2015]](https://laurentperrinet.github.io/publication/perrinet-bednar-15/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Edge co-occurrences can account for rapid categorization of natural versus animal images &lt;a href="https://laurentperrinet.github.io/publication/perrinet-bednar-15/" target="_blank" rel="noopener"&gt;[LP and Bednar, 2015]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The probability distribution function p(ψ,θ) represents the distribution of the different geometrical arrangements of edges’ angles, which we call a “chevron map”. We show here the histogram for non-animal natural images, illustrating the preference for co-linear edge configurations. For each chevron configuration, deeper and deeper red circles indicate configurations that are more and more likely with respect to a uniform prior, with an average maximum of about 3 times more likely, and deeper and deeper blue circles indicate configurations less likely than a flat prior (with a minimum of about 0.8 times as likely). Conveniently, this “chevron map” shows in one graph that non-animal natural images have on average a preference for co-linear and parallel edges (the horizontal middle axis) and orthogonal angles (the top and bottom rows), along with a slight preference for co-circular configurations (for ψ = 0 and ψ = ±π/2, just above and below the central row).
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="edge-co-occurences-in-natural-images-3"&gt;Edge co-occurences in natural images&lt;/h2&gt;
&lt;figure id="figure-edge-co-occurrences-can-account-for-rapid-categorization-of-natural-versus-animal-images-lp-and-bednar-2015httpslaurentperrinetgithubiopublicationperrinet-bednar-15"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-bednar-15/figure_chevrons2.png" alt="Edge co-occurrences can account for rapid categorization of natural versus animal images [[LP and Bednar, 2015]](https://laurentperrinet.github.io/publication/perrinet-bednar-15/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Edge co-occurrences can account for rapid categorization of natural versus animal images &lt;a href="https://laurentperrinet.github.io/publication/perrinet-bednar-15/" target="_blank" rel="noopener"&gt;[LP and Bednar, 2015]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The chevron maps reveal distinct edge configuration biases across image categories. Animal images show relatively more circular continuations and converging angles compared to non-animal images (red regions in central vertical axis), while having fewer co-linear, parallel and orthogonal arrangements (blue regions along horizontal axis). In contrast, man-made images exhibit a strong bias for co-linear features (intense red at center). This suggests the visual system must adapt to diverse statistical regularities rather than implementing a fixed association field pattern, as different image categories contain systematically different geometric arrangements of edges.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="edge-co-occurences-in-natural-images-4"&gt;Edge co-occurences in natural images&lt;/h2&gt;
&lt;figure id="figure-edge-co-occurrences-can-account-for-rapid-categorization-of-natural-versus-animal-images-lp-and-bednar-2015httpslaurentperrinetgithubiopublicationperrinet-bednar-15"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-bednar-15/figure_results.png" alt="Edge co-occurrences can account for rapid categorization of natural versus animal images [[LP and Bednar, 2015]](https://laurentperrinet.github.io/publication/perrinet-bednar-15/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Edge co-occurrences can account for rapid categorization of natural versus animal images &lt;a href="https://laurentperrinet.github.io/publication/perrinet-bednar-15/" target="_blank" rel="noopener"&gt;[LP and Bednar, 2015]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;This figure shows classification performance across image categories using different statistical features. We used an SVM classifier with three feature sets: first-order orientation statistics (FO), the reduced 2D &amp;ldquo;chevron map&amp;rdquo; (CM), and full 4D second-order statistics (SO). The classification accuracy (F1 score) was tested for distinguishing between image categories. Results show strong performance in separating man-made from natural images, as expected. More notably, the classifier achieved ~80% accuracy in discriminating animal vs non-animal natural images, matching human performance levels reported by Serre et al. This suggests that relatively simple edge co-occurrence statistics contain sufficient information for basic image categorization tasks, without requiring higher-level semantic processing.&lt;/p&gt;
&lt;p&gt;We also found that our model made the same errors as humans do: if an image without an animal contains more co-circular edges, it is more likely to be falsely categorized as containing an animal.&lt;/p&gt;
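&lt;p&gt;To make the benchmark concrete, here is a minimal sketch of this type of classification pipeline using scikit-learn; the random feature matrix, the dummy labels and the RBF kernel choice are placeholders for illustration only, not the actual features or settings used in the paper.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# placeholder features: e.g. one flattened chevron map (24 x 24 bins) per image
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 24 * 24))
y = rng.integers(0, 2, size=200)          # dummy labels: 1 = animal, 0 = non-animal

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("mean F1 on dummy data:", scores.mean())
&lt;/code&gt;&lt;/pre&gt;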
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="edge-co-occurences-in-natural-images-5"&gt;Edge co-occurences in natural images&lt;/h2&gt;
&lt;figure id="figure-edge-co-occurrences-can-account-for-rapid-categorization-of-natural-versus-animal-images-lp-and-bednar-2015httpslaurentperrinetgithubiopublicationperrinet-bednar-15"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-bednar-15/figure_chevrons.png" alt="Edge co-occurrences can account for rapid categorization of natural versus animal images [[LP and Bednar, 2015]](https://laurentperrinet.github.io/publication/perrinet-bednar-15/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Edge co-occurrences can account for rapid categorization of natural versus animal images &lt;a href="https://laurentperrinet.github.io/publication/perrinet-bednar-15/" target="_blank" rel="noopener"&gt;[LP and Bednar, 2015]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;While we demonstrated how association fields emerge from edge statistics, the resulting probability distribution represents an average across many possible configurations. Though this statistical approach successfully discriminates between image categories like animal vs non-animal images, it likely oversimplifies the true diversity of edge arrangements in natural scenes.&lt;/p&gt;
&lt;p&gt;Individual images contain unique geometrical patterns that can deviate significantly from these average statistics - for example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Smooth contours&lt;/li&gt;
&lt;li&gt;Edge occlusions&lt;/li&gt;
&lt;li&gt;Complex textures&lt;/li&gt;
&lt;li&gt;Fractal-like patterns&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Understanding this variability, rather than just mean tendencies, could provide deeper insights into how horizontal connectivity patterns may adapt to handle the rich complexity of natural scenes.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="can-we-explain-the-diversity-"&gt;Can we explain the diversity ?&lt;/h2&gt;
&lt;figure id="figure-bosking-et-al-1997"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Bosking97Fig4.jpg" alt="[Bosking *et al*, 1997]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Bosking &lt;em&gt;et al&lt;/em&gt;, 1997]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Indeed, this diversity is revealed in the anatomical data: V1 horizontal connectivity exhibits more complexity than suggested by the classical like-to-like hypothesis. While orientation-specific connections exist, they coexist with non-selective connections that link neurons irrespective of their tuning preferences. This diversity likely serves multiple computational functions:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Specific connections could support contour integration and feature binding&lt;/li&gt;
&lt;li&gt;Non-selective connections may enable broad contextual modulation&lt;/li&gt;
&lt;li&gt;Mixed connectivity patterns could help maintain network stability while preserving functional specificity&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This anatomical heterogeneity aligns with V1&amp;rsquo;s role in both specialized feature detection and broader contextual processing. Understanding how these distinct connectivity patterns interact remains an active area of research in visual neuroscience.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="predictive-processing"&gt;Predictive processing&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;To understand the diversity in horizontal connectivity patterns, we developed a biologically plausible hierarchical model based on a &lt;strong&gt;Convolutional Neural Network (CNN) backbone&lt;/strong&gt;. The model processes natural images through multiple convolutional layers organized in a hierarchical structure:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Natural image as input&lt;/li&gt;
&lt;li&gt;Local receptive fields via convolution operations&lt;/li&gt;
&lt;li&gt;Hierarchical processing through multiple layers&lt;/li&gt;
&lt;/ol&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="predictive-processing-1"&gt;Predictive processing&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;To bridge the gap between anatomical observations and the functional requirements of visual processing, we added two key ingredients to the sparse deep predictive coding (SDPC) model:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Sparse&lt;/strong&gt; connectivity patterns:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Enforcing regularization of the activity map using an L1 penalty&lt;/li&gt;
&lt;li&gt;Activity computed via recurrent local connectivity&lt;/li&gt;
&lt;li&gt;Similar to biological observations&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Feedback&lt;/strong&gt; from efferent layers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Predicts activity of afferent layer&lt;/li&gt;
&lt;li&gt;Only residual prediction error is processed&lt;/li&gt;
&lt;li&gt;Defines long-range inter-areal connectivity&lt;/li&gt;
&lt;li&gt;Specific influence demonstrated in Neural Computation paper&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;By defining a &lt;strong&gt;cost based on the prediction error&lt;/strong&gt; in each layer, everything stays differentiable, such that we can use classical gradient descent. These additions should allow us to better understand how feedback shapes visual processing in biological neural networks.&lt;/p&gt;
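&lt;p&gt;To make these two ingredients concrete, here is a highly simplified, single-layer toy sketch of a sparse coding update driven by the prediction error (an ISTA-like soft-thresholded gradient step); it is written under my own assumptions and is not the actual SDPC implementation, in which layers are convolutional and the residual error is passed between layers.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def soft_threshold(a, lam):
    # proximal operator of the L1 penalty: this is the 'sparse' ingredient
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def sparse_coding_step(x, D, lam=0.1, lr=0.1, n_iter=50):
    """Toy inference: find sparse activities a such that D @ a predicts the
    input x, by gradient steps on the prediction error followed by soft
    thresholding. In the full SDPC model the residual error is what gets
    passed on to the next layer (the 'feedback' ingredient)."""
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        error = x - D @ a                      # prediction error of this layer
        a = soft_threshold(a + lr * (D.T @ error), lr * lam)
    return a, error

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))                 # toy dictionary (dense, not convolutional)
D /= np.linalg.norm(D, axis=0)
x = rng.normal(size=64)
a, residual = sparse_coding_step(x, D)
print("active units:", np.count_nonzero(a))
&lt;/code&gt;&lt;/pre&gt;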
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="predictive-processing-2"&gt;Predictive processing&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Our key findings reveal highly interpretable receptive fields:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;First layer filters exhibit classical orientation-selective filters&lt;/li&gt;
&lt;li&gt;When trained on face datasets, specialized feature detectors emerge in the second layer for:
&lt;ul&gt;
&lt;li&gt;Eyes&lt;/li&gt;
&lt;li&gt;Ears&lt;/li&gt;
&lt;li&gt;Mouths&lt;/li&gt;
&lt;li&gt;Smooth contours&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;These results suggest that predictive processing frameworks may offer better &lt;strong&gt;interpretability&lt;/strong&gt; compared to classical deep learning architectures.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="predictive-processing-3"&gt;Predictive processing&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2020-09-25_IRPHE/raw/master/figures/PCOMPBIOL-D-19-01811_R2_compressed_FigS4.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;More specifically, in the context of our focus today, we can look at the co-occurrence of activations learned by the model.&lt;/p&gt;
&lt;p&gt;Illustration of the procedure used to generate an interaction map. In this illustrative example we consider a V1 representation with only 4 feature maps (represented in the upper-left box). Step 1 is to extract a neighborhood (of size 3x3 in the illustration only) around the most strongly activated neuron (represented with a red square) for a given central preferred orientation (denoted θc). Step 2 is to normalize the neural activity in the extracted neighborhood using the marginal activity (see Eq. 8). Step 3 is to compute the resulting orientation and activity at every position of the neighborhood using a circular mean (see Eq. 11 and Eq. 12, respectively). To keep the figure concise, only the computation of the central edge of the interaction map is illustrated. For simplicity, the illustration shows a single neighborhood extraction, whereas the interaction maps shown in the paper are computed by averaging neighborhoods centered on the 10 most strongly activated neurons.&lt;/p&gt;
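&lt;p&gt;In code-like terms, and under my own simplifying assumptions (a dense activity tensor indexed as [orientation, y, x], a single neighborhood, illustrative sizes), the procedure can be sketched as follows.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def interaction_map(activity, thetas, half=4):
    """Toy version of the procedure: (1) find the most active unit and its
    preferred orientation, (2) extract the surrounding neighborhood across
    all orientation maps, (3) normalize by the marginal activity at each
    position, (4) take a circular mean over orientations at each position."""
    n_theta, H, W = activity.shape
    k, y0, x0 = np.unravel_index(activity.argmax(), activity.shape)   # step 1
    ys = slice(max(y0 - half, 0), min(y0 + half + 1, H))
    xs = slice(max(x0 - half, 0), min(x0 + half + 1, W))
    patch = activity[:, ys, xs]                                       # step 2
    patch = patch / (patch.sum(axis=0, keepdims=True) + 1e-9)         # step 3
    z = (patch * np.exp(2j * thetas)[:, None, None]).sum(axis=0)      # step 4
    return np.angle(z) / 2, np.abs(z), thetas[k]

rng = np.random.default_rng(0)
thetas = np.linspace(0, np.pi, 4, endpoint=False)                     # 4 feature maps
orientation, strength, theta_c = interaction_map(rng.random((4, 32, 32)), thetas)
print(orientation.shape, strength.shape, theta_c)
&lt;/code&gt;&lt;/pre&gt;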
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="predictive-processing-4"&gt;Predictive processing&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/boutin-franciosini-chavane-ruffier-perrinet-20Fig3.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
What is more relevant is to study the interaction patterns between neurons from the first layer.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="predictive-processing-5"&gt;Predictive processing&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/boutin-franciosini-chavane-ruffier-perrinet-20Fig4.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
We can further analyze the relative role of feedback: relative co-linearity and co-circularity of the V1 interaction map with respect to feedback. (A) In the end-zone. (B) In the side-zone. For each plot, the left and right blocks of bars represent the relative co-linearity and co-circularity, relative to their respective values without feedback (see Eq. 23 and Eq. 24). Bars’ heights represent the median over all orientations, and error bars are computed as the median absolute deviation.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="predictive-processing-with-pooling"&gt;Predictive processing with pooling&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2022httpslaurentperrinetgithubiopublicationfranciosini-21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/franciosini-21/featured.jpg" alt="[[Boutin *et al*, 2022](https://laurentperrinet.github.io/publication/franciosini-21/)]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/franciosini-21/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2022&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
It is worth noting that extending the model with additional architectural features, such as long-range horizontal connectivity across neighboring hypercolumns, enables the emergence of more complex properties including topographic maps and complex cell-like responses. However, examining these extensions falls beyond the scope of today&amp;rsquo;s presentation.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-14"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-bosking-et-al-1997"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Bosking97Fig4.jpg" alt="[Bosking *et al*, 1997]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Bosking &lt;em&gt;et al&lt;/em&gt;, 1997]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
As a result, predictive processing may be an efficient model to better understand the richness of horizontal connectivity patterns.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="challenging-the-like-to-like-hypothesis-15"&gt;Challenging the like-to-like hypothesis&lt;/h2&gt;
&lt;figure id="figure-revisiting-horizontal-connectivity-rules-in-v1-from-like-to-like-towards-like-to-all-chavane-lp-and-rankin-2022httpslaurentperrinetgithubiopublicationchavane-22"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/chavane-22/Chavane2022fig1AE.jpg" alt="Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All [[Chavane, LP and Rankin, 2022]](https://laurentperrinet.github.io/publication/chavane-22/)" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Revisiting Horizontal Connectivity Rules in V1: From like-to-like towards like-to-All &lt;a href="https://laurentperrinet.github.io/publication/chavane-22/" target="_blank" rel="noopener"&gt;[Chavane, LP and Rankin, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;To conclude, our review of horizontal connectivity in V1 reveals patterns more complex than initially theorized. The classical like-to-like hypothesis, while valuable, doesn&amp;rsquo;t fully capture the &lt;strong&gt;diversity&lt;/strong&gt; of observed connectivity patterns.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Mathematical modeling&lt;/strong&gt; has proven essential in bridging theory and biology. Our predictive processing framework shows how simple computational principles can explain the emergence of these complex connectivity patterns. The model demonstrates how feedback influences lateral interactions and reproduces key experimental observations.&lt;/p&gt;
&lt;p&gt;However, &lt;strong&gt;important questions remain unanswered&lt;/strong&gt;. We need to better understand how precise timing information is encoded in these circuits, how temporal dynamics shape processing, and whether similar principles apply across other cortical areas.&lt;/p&gt;
&lt;p&gt;These fundamental questions will guide future experimental and theoretical work as we continue to unravel the computational principles of cortical processing.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2&gt;&lt;u&gt;
[2025-02-11] When Cortical Neurons Talk Sideways: Beyond Feedforward Visual Processing
&lt;/u&gt;&lt;/h2&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;!-- &lt;a href="https://laurentperrinet.github.io/grant/anr-anr"&gt; --&gt;
&lt;img src="https://laurentperrinet.github.io/grant/polychronies/featured.png" alt="header" height="300"&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/post/2019-06-22_ardemone/featured.png" alt="header" height="300"&gt;
&lt;/a&gt;--&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://laurentperrinet.github.io/slides/2025-02-11-neuromath/?transition=fade"&gt; &lt;i&gt; Laurent Perrinet &lt;/i&gt; &lt;/a&gt; - &lt;a href="https://laurentperrinet.github.io"&gt;https://laurentperrinet.github.io&lt;/a&gt;
&lt;br&gt;
Séminaire Neuromathématiques, &lt;b&gt;Collège de France&lt;/b&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;img src="https://laurentperrinet.github.io/qrcode.png" alt="QR code" height="80" width="80"&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
Thanks for your attention, I would be happy to take your questions.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="dynamics-of-vision-1"&gt;Dynamics of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;another important missing feature: time&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-2"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-visual-latencies-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency.jpg" alt="Visual latencies ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;the latencies are similar in the human brain, merely scaled with brain size&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;as a consequence, it is thought that this efficiency is achieved by spikes, that is, brief all-or-none events which are passed, within the very large network that forms the brain, from one assembly of neurons to another.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-3"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-4"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/figure-tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston, 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston, 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-5"&gt;Dynamics of vision&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/perrinet-19-temps/flash_lag.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-6"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-diagonal-markov-model-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_DiagonalMarkov.jpg" alt="Diagonal markov model ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Diagonal Markov model (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-7"&gt;Dynamics of vision&lt;/h2&gt;
&lt;!--
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/PBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/MBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/positional-delay.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;Flash-lag effect: MBP (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;)&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="dynamics-of-vision-8"&gt;Dynamics of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable height="420" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="dynamics-of-vision-neural-modeling"&gt;Dynamics of vision: Neural modeling&lt;/h1&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/PerrinetBednar15/raw/master/talk/figure_series.png" height="420"&gt;
&lt;/span&gt;&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/PerrinetBednar15/raw/master/talk/figure_series_11.png" height="420"&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;topography?&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks-spiking-motifs"&gt;Spiking Neural Networks: Spiking motifs&lt;/h1&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;These observations have led us to &lt;em&gt;review&lt;/em&gt; neurobiological evidence around the existence of a neural representation that would use the relative time of spikes as a means of representing information. In particular, it is possible to use the conduction &lt;em&gt;delays&lt;/em&gt; that exist in the transmission of spikes from one neuron to another. It may seem paradoxical, but these delays are not simply a constraint, but can help to improve our ability to represent information by way of &lt;em&gt;spiking motifs&lt;/em&gt;.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-1"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-grimaldi-et-al-2023-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="[Grimaldi *et al*, 2023, [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023, &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If we consider, for example, this ultra-simplified network consisting of three presynaptic neurons and two output neurons connected by &lt;em&gt;heterogeneous&lt;/em&gt; delays, then we can see that a &lt;em&gt;synchronous&lt;/em&gt; input will generate membrane activity in the two output neurons at different times, so the threshold will never be reached, and these neurons will not produce an output impulse. On the other hand, if these delays are such that the action potentials converge on the neuron at the same instant, then these contributions will be able to sum up at the &lt;em&gt;same instant&lt;/em&gt; and produce an output spike, as denoted here by the red bar.&lt;/p&gt;
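&lt;p&gt;For the notes, a toy numerical sketch of this coincidence detection by delays (a current-based leaky integrate-and-fire neuron with instantaneous synapses; all parameter values are my own illustrative choices, not those of the figure).&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def lif_with_delays(spike_times, delays, weight=0.4, threshold=1.0,
                    tau=5.0, dt=0.1, T=60.0):
    """Toy leaky integrate-and-fire neuron: each presynaptic spike arrives
    after its axonal delay and injects a fixed synaptic weight; the membrane
    leaks with time constant tau and fires (then resets) at threshold."""
    arrival_bins = np.floor((np.asarray(spike_times) + np.asarray(delays)) / dt).astype(int)
    v, out_spikes = 0.0, []
    for step in range(int(T / dt)):
        v += dt * (-v / tau)                                    # leak
        v += weight * np.count_nonzero(arrival_bins == step)    # delayed inputs
        if v &gt;= threshold:
            out_spikes.append(round(step * dt, 3))
            v = 0.0
    return out_spikes

# with matched delays the three spikes converge at the same instant and a spike is emitted
print("matched delays:", lif_with_delays([10.0, 12.0, 14.0], [6.0, 4.0, 2.0]))
# with identical (homogeneous) delays the arrivals are spread out and no spike is emitted
print("uniform delays:", lif_with_delays([10.0, 12.0, 14.0], [2.0, 2.0, 2.0]))
&lt;/code&gt;&lt;/pre&gt;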
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-2"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To better understand this mechanism, let&amp;rsquo;s return to our animation of a spiking neuron. Action potentials arrive at the neuron and are &lt;em&gt;immediately&lt;/em&gt; transmitted to the neuron&amp;rsquo;s cell body to be integrated and potentially generate a spike.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-3"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/HSD.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When using &lt;em&gt;heterogeneous&lt;/em&gt; delays, the situation is different, as the information takes a different time to travel along each connection before reaching the neuron&amp;rsquo;s cell body. Note that if the input contains a particular &lt;em&gt;spiking motif&lt;/em&gt;, highlighted here by the green action potentials, then these spikes converge at the same instant thanks to the delays, and the neuron signals the detection by emitting a new spike.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
We used this theoretical principle in an algorithm for detecting motion in an image. To do this, we first generated event data using natural images that are set in motion along trajectories resembling those produced by free exploration of a visual scene. You&amp;rsquo;ll notice several features of the event-driven output, such as the fact that faster motion generates more spikes, or that edges oriented parallel to the direction of motion produce few luminance changes, and therefore little spike output - the so-called aperture problem.
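(Note: a toy sketch of this event-generation step, assuming a simple frame-difference threshold applied to a translating synthetic texture; the actual stimuli used natural images moving along eye-movement-like trajectories.)
&lt;pre&gt;&lt;code class="language-python"&gt;
import numpy as np

rng = np.random.default_rng(0)
H, W, T = 32, 32, 20
texture = rng.random((H, W + T))        # synthetic texture, wider than the field of view
threshold = 0.15                        # luminance-change threshold

events = []                             # (t, x, y, polarity) tuples
prev = texture[:, 0:W]
for t in range(1, T):
    frame = texture[:, t:t + W]         # translate the view by one pixel per step
    diff = frame - prev
    ys, xs = np.where(np.abs(diff) &gt; threshold)
    for y, x in zip(ys, xs):
        events.append((t, x, y, 1 if diff[y, x] &gt; 0 else -1))
    prev = frame

print(len(events), "events generated")
&lt;/code&gt;&lt;/pre&gt;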
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-1"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://raw.githubusercontent.com/laurentperrinet/figures/7f382a8074552de1a6a0c5728c60d48788b5a9f8/animated_neurons/conv_HDSNN.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We then used a neural network with a classical architecture, which we enhanced with a spiking representation that takes into account the different possible synaptic delays. In this figure, the input is shown in the left grid as the occurrence of spikes of positive or negative polarity. Different processing channels, denoted by the colors green and orange, are then applied to this input to produce membrane activity. As illustrated above, this activity produces output spikes through synaptic connection kernels with heterogeneous delays, corresponding to the detection of precise spatio-temporal patterns.&lt;/p&gt;
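&lt;p&gt;(Note: a minimal sketch of this idea with a single output map and purely illustrative sizes, not the paper&amp;rsquo;s implementation: the kernel gains a delay axis, and the time elapsed since each input event selects which weight that event contributes through.)&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;
import numpy as np

H, W, n_delays, k = 32, 32, 12, 5
rng = np.random.default_rng(1)
kernel = rng.normal(scale=0.1, size=(n_delays, k, k))   # (delay, dy, dx)

def membrane(events, t_now):
    """Membrane potential of one output map at time t_now: each past event
    contributes through the weight stored at its elapsed delay."""
    v = np.zeros((H, W))
    for (t_e, x, y, pol) in events:
        d = t_now - t_e                     # elapsed time indexes the delay axis
        if n_delays &gt; d &gt;= 0:
            y0, x0 = y - k // 2, x - k // 2
            for dy in range(k):
                for dx in range(k):
                    yy, xx = y0 + dy, x0 + dx
                    if H &gt; yy &gt;= 0 and W &gt; xx &gt;= 0:
                        v[yy, xx] += pol * kernel[d, dy, dx]
    return v

events = [(0, 10, 10, 1), (2, 12, 10, 1), (4, 14, 10, -1)]  # (t, x, y, polarity)
print("max membrane potential:", membrane(events, t_now=5).max())
# output spikes would then be emitted wherever the potential crosses a threshold
&lt;/code&gt;&lt;/pre&gt;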
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-2"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;One advantage of this network is that it is differentiable, enabling us to apply classical machine learning methods, notably supervised learning. We then see the emergence of different convolution kernels, and here I show a subset of these kernels for different motion directions, as denoted by the red arrows on the left of the graph. Each row corresponds to one direction, and the columns show the spatial kernels obtained at the different delays, from a delay of one time step on the right to a delay of 12 time steps on the left. Detectors that follow the motion emerge: in the top row, for example, the kernel shifts from top to bottom across delays. These kernels integrate inputs of both positive polarity (in red) and negative polarity (in blue).
Such spatio-temporal filtering is observed in neurobiology but, to my knowledge, had never been obtained in a model of spiking neurons trained under natural-like conditions.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-3"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy_raw.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We will now study the performance of this network in detecting motion in the flow of events entering the network. When we use all the weights of the convolution kernel, we get a very good performance of the order of 99%, represented by the black dot in the top right-hand corner. Note that in the kernels we&amp;rsquo;ve seen emerge, most of the synaptic weights are close to zero, so we might consider removing some of these weights, as this can be shown to reduce the number of event calculations required.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-4"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy_shortening.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
This is what we&amp;rsquo;ve done, by first removing the parts of the kernel corresponding to the longest delays. This &amp;ldquo;shortens&amp;rdquo; the kernel. We quickly observed a degradation in performance, which reached half-saturation when we reduced the number of weights by around 50%. This demonstrates the importance of integrating information that is quite distant and structured over time.
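(Note: a sketch of this &amp;ldquo;shortening&amp;rdquo; operation on a toy kernel with illustrative sizes: truncating the delay axis simply zeroes the weights attached to the longest delays.)
&lt;pre&gt;&lt;code class="language-python"&gt;
import numpy as np

rng = np.random.default_rng(2)
kernel = rng.normal(scale=0.1, size=(12, 5, 5))     # (delay, dy, dx)

def shorten(kernel, n_keep):
    """Keep only the n_keep shortest delays and zero the rest."""
    short = kernel.copy()
    short[n_keep:] = 0.0
    return short

for n_keep in (12, 6, 3, 1):
    s = shorten(kernel, n_keep)
    print(n_keep, "delays kept,", int((s != 0).sum()), "non-zero weights")
&lt;/code&gt;&lt;/pre&gt;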
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-5"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In a second step, we performed a pruning operation, which consists in progressively removing the weakest weights. This time, performance remains optimal over a wide compression range, and we reach half-saturation only when around 99.8% of the weights have been removed. This means that the network maintains very good performance even when only about one weight in 600 has been kept, and therefore with a computational cost reduced by a factor of roughly 600. This property, which we didn&amp;rsquo;t expect, seems promising for creating machine learning algorithms that are less energy-hungry.&lt;/p&gt;
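&lt;p&gt;(Note: a generic magnitude-pruning sketch on a toy kernel; the exact pruning schedule used in the paper may differ. Only the largest-magnitude weights are kept.)&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;
import numpy as np

rng = np.random.default_rng(3)
kernel = rng.normal(scale=0.1, size=(12, 5, 5))           # (delay, dy, dx)

def prune(kernel, keep_fraction):
    """Zero all weights except the keep_fraction of largest magnitude."""
    flat = np.abs(kernel).ravel()
    n_keep = max(1, int(keep_fraction * flat.size))
    cutoff = np.sort(flat)[-n_keep]                       # smallest magnitude still kept
    return np.where(np.abs(kernel) &gt;= cutoff, kernel, 0.0)

for frac in (1.0, 0.1, 0.01, 1 / 600):
    p = prune(kernel, frac)
    print(f"{frac:.4f} of weights kept -&gt; {int((p != 0).sum())} non-zero weights")
&lt;/code&gt;&lt;/pre&gt;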
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2024-11-18-journee-biomometisme</title><link>https://laurentperrinet.github.io/slides/2024-11-18-journee-biomometisme/</link><pubDate>Mon, 18 Nov 2024 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2024-11-18-journee-biomometisme/</guid><description>&lt;section&gt;
&lt;h2&gt;&lt;u&gt;
[2024-11-18] NeuroAI: multiple interactions between Neuroscience and Artificial Intelligence
&lt;/u&gt;&lt;/h2&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;a href="https://laurentperrinet.github.io/grant/anr-anr"&gt;
&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/header.png" alt="header" height="300"&gt;
&lt;/a&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://laurentperrinet.github.io/slides/2024-11-18-journee-biomometisme/?transition=fade"&gt; &lt;i&gt; Laurent Perrinet &lt;/i&gt; &lt;/a&gt; - &lt;a href="https://laurentperrinet.github.io"&gt;https://laurentperrinet.github.io&lt;/a&gt;
&lt;br&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/featured.png" alt="ANR" height="80" width="80"&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;outline =&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Hello. I am Laurent Perrinet, a CNRS research director in computational neuroscience at the Institut de Neurosciences de la Timone in Marseille. Thank you for the invitation to take part in this &amp;ldquo;Biomimove 2024: Action, Perception and Processing&amp;rdquo; scientific day, at the crossroads between robotics and the life sciences.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Computational neuroscience is the field that tries to extract computational principles from our knowledge of biological neuroscience, such as the formal neuron and its learning capacity, which is the basic building block of neural networks. These networks led to the AI revolution with deep networks.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;I am convinced that we are at the turning point of a new era in the development of embedded systems, in which artificial intelligence has the potential to create disruptive innovations matching the performance of natural intelligence, and for which it is essential to draw inspiration from biological neuroscience.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;This is why I am very pleased to first present the ANR AgileNeuRobot project, an interdisciplinary research project aiming to develop bio-mimetic agile aerial robots for flight in real-life conditions.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With this in mind, and in order to characterise some of the challenges of embedded AI, notably in the space domain, I will present two levers inspired by biology that illustrate how neuroscience can move the field forward in a radical way. Integrating biomimetic knowledge into embedded platforms can improve their resilience and adaptability in hostile environments while reducing energy consumption. I would then be delighted to open a discussion with you on these topics and to hear about your own experiences and perspectives.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="agileneurobot-fiche-didentité"&gt;AgileNeuRobot: Fact sheet&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Titre : Robots aériens agiles bio-mimetiques pour le vol en conditions réelles&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Title : Bio-mimetic agile aerial robots flying in real-life conditions&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;CES : CE23 - Intelligence Artificielle (ANR-20-CE23-0021)&lt;/li&gt;
&lt;li&gt;Duration: 4 years, from 1 October 2021 to 30 September 2025&lt;/li&gt;
&lt;li&gt;Total budget: 435 k€&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
The ANR AgileNeuRobot project is an interdisciplinary project funded by the French National Research Agency (ANR) under the « Intelligence Artificielle » call for projects (ANR-20-CE23-0021). It aims to develop bio-mimetic agile aerial robots for flight in real-life conditions over a period of 4 years, from 1 October 2021 to 30 September 2025. It is funded with a total of 435 k€ and is a convincing example of the potential impact of computational neuroscience on embedded systems, here in the field of autonomous aerial robots.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="agileneurobot-consortium"&gt;AgileNeuRobot: Consortium&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img src="https://laurentperrinet.github.io/author/stéphane-viollet/avatar.jpg" alt="SV" height="150"&gt;&lt;/th&gt;
&lt;th&gt;&lt;img src="https://laurentperrinet.github.io/author/ryad-benosman/avatar.jpg" alt="RB" height="150"&gt;&lt;/th&gt;
&lt;th&gt;&lt;img src="https://laurentperrinet.github.io/author/laurent-u-perrinet/avatar.png" alt="LP" height="150"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Stéphane Viollet&lt;/td&gt;
&lt;td&gt;Ryad Benosman&lt;/td&gt;
&lt;td&gt;Laurent Perrinet&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Julien Diperi&lt;/td&gt;
&lt;td&gt;Sio-Hoï Ieng&lt;/td&gt;
&lt;td&gt;Emmanuel Daucé&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Post-doc 1&lt;/td&gt;
&lt;td&gt;Post-doc 2&lt;/td&gt;
&lt;td&gt;PhD (&lt;a href="https://laurentperrinet.github.io/author/jean-nicolas-j%C3%A9r%C3%A9mie/" target="_blank" rel="noopener"&gt;JN Jérémie&lt;/a&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Inst Sciences Mouvement&lt;/td&gt;
&lt;td&gt;Inst de la Vision&lt;/td&gt;
&lt;td&gt;Inst Neurosci de la Timone&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;The AgileNeuRobot project is a project that I coordinate in collaboration with several institutions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the Institut des Sciences du Mouvement for the bio-inspired robotics,&lt;/li&gt;
&lt;li&gt;the Institut de la Vision for the development of new sensors,&lt;/li&gt;
&lt;li&gt;the Institut de Neurosciences de la Timone (Aix-Marseille Université) for the theoretical aspects and the integration of these disciplines.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Together, we work on both the technical and the scientific aspects needed to build these aerial robots. The goal of the project is to contribute not only to the development of new technologies, but also to the understanding and elaboration of new theoretical models explaining the natural mechanisms underlying the adaptability and learning capabilities of biological systems.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="agileneurobot-agile--performant-et-efficace"&gt;AgileNeuRobot: Agile = high-performing and efficient&lt;/h2&gt;
&lt;figure id="figure-the-system-includes-3-units-to-process-event-driven-visual-inputs-communicating-by-feed-forward-and-feed-back-paths"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/principe_agile.jpg" alt="The system includes 3 units to process event-driven visual inputs communicating by feed-forward and feed-back paths." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption data-pre="Figure&amp;nbsp;" data-post=":&amp;nbsp;" class="numbered"&gt;
The system includes 3 units to process event-driven visual inputs communicating by feed-forward and feed-back paths.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;The system under development is an example of bio-inspired robotics. It is designed to process visual data in real time and to react quickly to changes in the environment, in particular to avoid or intercept objects in flight.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;performance: keeping good acuity while responding quickly, almost immediately.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;efficiency: reduced energy requirements for autonomous operation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To achieve this, we used an insect-inspired architecture that combines event-based sensors with spiking neural networks to create an agile and high-performing system, which I will describe in the rest of the talk.&lt;/p&gt;
&lt;p&gt;But first, I would like to highlight two major constraints of this type of embedded system:&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="enjeux-de-lia-embarquée--latence-de-réponse"&gt;Challenges of embedded AI: response latency&lt;/h2&gt;
&lt;figure id="figure-visual-latencies-grimaldi-et-al-2022httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency.jpg" alt="Visual latencies [[Grimaldi *et al*, 2022]](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)" loading="lazy" data-zoomable width="55%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;First of all, biological sensory systems are made up of processing stages, each of which comes with its own delay. Here I describe the processing chain of a visual image, in this case for a child playing a video game who has to click on the right button, which illustrates the different latencies of information processing from vision to action.&lt;/p&gt;
&lt;p&gt;Even if the delays in an embedded system are shorter, the information at the different processing stages can still be out of step and requires adapted processing in order to respond as immediately as possible. I am thinking in particular of the detection of very fast-moving objects from a moving robot.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="enjeux-de-lia-embarquée--budget-énergétique"&gt;Challenges of embedded AI: energy budget&lt;/h2&gt;
&lt;figure id="figure-prototype-avec-caméra-événementielle-et-calculateur"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/prototype.jpg" alt="Prototype avec caméra événementielle et calculateur." loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Prototype with an event camera and an on-board computer.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;The second constraint, linked to the first: energy consumption.&lt;/p&gt;
&lt;p&gt;Here is a photo of our first prototype, which includes, in addition to the usual equipment of an aerial robot (height sensors, accelerometers, navigation computer), several cameras as well as a dedicated processing unit.&lt;/p&gt;
&lt;p&gt;It must be understood that this additional equipment consumes a non-negligible amount of energy. The battery therefore has to be sized accordingly, which in turn increases the energy required for the flight itself.&lt;/p&gt;
&lt;p&gt;I will propose two levers, inspired by biology, to move the field forward radically: not just to gain 30%, but to change scale altogether.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns"&gt;Lever #1: Spiking neural networks (SNNs)&lt;/h2&gt;
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption data-pre="Figure&amp;nbsp;" data-post=":&amp;nbsp;" class="numbered"&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;New cameras: based on the same technology as a CMOS sensor, but instead of collecting the luminance values of all the pixels at regular intervals, each pixel is independent.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the way information is represented is different: the signal consists in emitting an event if and only if a change has been observed by that pixel, which is represented here by these streams of events (see the sketch below).&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
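&lt;p&gt;(Note: a toy, single-pixel model of this principle, with a made-up contrast threshold and luminance trace: an event is emitted whenever the log-luminance has drifted by more than a fixed threshold since the last event.)&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;
import math

def pixel_events(luminance, threshold=0.2):
    """Toy event-camera pixel: emit a signed event each time the log-luminance
    has changed by more than the contrast threshold since the last event."""
    events, ref = [], math.log(luminance[0])
    for t, L in enumerate(luminance[1:], start=1):
        delta = math.log(L) - ref
        if abs(delta) &gt; threshold:
            events.append((t, +1 if delta &gt; 0 else -1))   # (time step, polarity)
            ref = math.log(L)
    return events

# a pixel seeing a slow brightening followed by a sudden dimming
print(pixel_events([1.0, 1.1, 1.3, 1.6, 2.0, 2.0, 0.5]))
&lt;/code&gt;&lt;/pre&gt;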
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-1"&gt;Lever #1: Spiking neural networks (SNNs)&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Sensor&lt;/th&gt;
&lt;th&gt;Range&lt;/th&gt;
&lt;th&gt;Framerate&lt;/th&gt;
&lt;th&gt;Resolution&lt;/th&gt;
&lt;th&gt;Power&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Human eye&lt;/td&gt;
&lt;td&gt;60 (?) dB&lt;/td&gt;
&lt;td&gt;300 (?) fps&lt;/td&gt;
&lt;td&gt;100 (?) Mpx&lt;/td&gt;
&lt;td&gt;10 mW&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DSLR&lt;/td&gt;
&lt;td&gt;44.6 dB&lt;/td&gt;
&lt;td&gt;120 fps&lt;/td&gt;
&lt;td&gt;2&amp;ndash;20 Mpx&lt;/td&gt;
&lt;td&gt;30 W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ultra-high speed&lt;/td&gt;
&lt;td&gt;64 dB&lt;/td&gt;
&lt;td&gt;10^4 fps&lt;/td&gt;
&lt;td&gt;0.3&amp;ndash;4 Mpx&lt;/td&gt;
&lt;td&gt;300 W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Event-based&lt;/td&gt;
&lt;td&gt;120 dB&lt;/td&gt;
&lt;td&gt;10^6 fps&lt;/td&gt;
&lt;td&gt;0.1&amp;ndash;2 Mpx&lt;/td&gt;
&lt;td&gt;30 mW&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Event cameras have several remarkable properties. First, the temporal precision of the events is of the order of a microsecond, which allows a theoretical rate of the order of one million frames per second. This can be compared with a conventional camera, which runs at around a hundred frames per second, or with a high-speed camera, which can reach 10,000 frames per second. It is difficult to estimate the sampling rate of human perception: while 25 frames per second is often enough to watch a film, it has been shown that the human eye can distinguish temporal details down to the millisecond.&lt;/p&gt;
&lt;p&gt;Another important characteristic of these cameras is their ability to capture a very wide range of luminance, far exceeding that of conventional cameras, at 120 dB (a factor of one million, compared with the roughly one-in-a-thousand factor of the human eye between full moon and sunlight).&lt;/p&gt;
&lt;p&gt;It should be noted that the spatial resolution of these cameras is often relatively modest, of the order of a megapixel. However, this is not a technical limitation, but rather a consequence of the technological applications in which these cameras are commonly used.&lt;/p&gt;
&lt;p&gt;Compared with conventional cameras, which consume several watts, event cameras use very little electrical power, of the order of 10 milliwatts, a consumption equivalent to that of the human eye.
&lt;a href="https://en.wikipedia.org/wiki/Event_camera#Functional_description" target="_blank" rel="noopener"&gt;https://en.wikipedia.org/wiki/Event_camera#Functional_description&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-2"&gt;Lever #1: Spiking neural networks (SNNs)&lt;/h2&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/HDSNN_conv.png" alt="" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;figure id="figure-the-hd-snn-neural-network-grimaldi-et-al-2023httpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="The HD-SNN neural network [[Grimaldi *et al*, 2023]](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;These cameras seem to offer only advantages, but then, how do we process this new data representation? Neuroscience shows that neurons do not manipulate continuous values (as the units of deep learning do), but communicate in exactly the same way as these cameras, by exchanging brief prototypical pulses, the action potentials (spikes).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Our solution: an architecture similar to deep learning, but where each neuron (the elementary building block) is a simplified model of a biological spiking neuron. This raises a difficulty, which we managed to resolve theoretically. An additional advantage is that this kind of computation is currently being developed on embedded chips (just like the pixels of the event camera).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;our architecture thus works directly on this same representation. Another advantage: « always-on computing ».&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;What are the results? Can we evaluate them before such chips become available?&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-3"&gt;Lever #1: Spiking neural networks (SNNs)&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network-grimaldi-et-al-2023httpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/accuracy.png" alt="The HD-SNN neural network [[Grimaldi *et al*, 2023]](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;p&gt;Time-to-Contact maps &lt;a href="https://laurentperrinet.github.io/publication/nunes-23-iccv" target="_blank" rel="noopener"&gt;[Nunes &lt;em&gt;et al&lt;/em&gt;, 2023]&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTES&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Our simulations show a very high efficiency (here for categorising a type of optic flow, which can be used to guide navigation).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;one innovative aspect of our technology lies in the ability to use just as many neurons, but far fewer connections. We have also shown that the efficiency remains acceptable: compared with a conventional technology (in orange), which shows a rapid drop, our results show good efficiency, with the critical half-value reached for a 700x gain (note the log axis). This is what is called « frugal computing », and we are now working on its implementation within a PEPR IA programme.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;this is an important step, but we can go further, and I will present a second lever: avoid processing everything, and process only what is necessary.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="levier-2-vision-active--active-vision"&gt;Lever #2: Active vision&lt;/h2&gt;
&lt;figure id="figure-jérémie-et-al-2024httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-24-ccn/featured.png" alt="[[Jérémie *et al*, 2024](https://laurentperrinet.github.io/publication/jeremie-25)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25" target="_blank" rel="noopener"&gt;Jérémie &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;To introduce this, let me first illustrate it with the work of the Russian researcher Yarbus in the middle of the last century. When a visual scene is presented to an observer (as with the painting in panel A), the observer makes a series of jumps across the image, called saccades.&lt;/p&gt;
&lt;p&gt;Indeed, our vision has the property of being foveated, so that most of our visual processing is concentrated along the axis of gaze. This property co-evolved with the ability to make fast eye movements, and it gives an evolutionary advantage to predators, which can act more quickly on their environment to catch prey.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-2-vision-active--active-vision-1"&gt;Lever #2: Active vision&lt;/h2&gt;
&lt;figure id="figure-jérémie-et-al-2024httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/featured.jpg" alt="[[Jérémie *et al*, 2024](https://laurentperrinet.github.io/publication/jeremie-25/)]" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25/" target="_blank" rel="noopener"&gt;Jérémie &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;This ability to act on the sensory input, and in particular to deploy this kind of attentional capability, is largely absent from classical machine-learning approaches, and we were able to implement it thanks to the ANR project.&lt;/p&gt;
&lt;p&gt;To do so, we used a log-polar transform that concentrates information around the axis of gaze, as can be seen inside the area marked by the grey zone. Note also the importance of the point on which the gaze lands, in particular whether it is far from or close to the object of interest.&lt;/p&gt;
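&lt;p&gt;(Note: a minimal sketch of this log-polar idea with illustrative parameters; the transform actually used in Jérémie et al. differs in detail. Sampling positions are spaced geometrically in eccentricity, so resolution is highest at the fixation point and falls off towards the periphery.)&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;
import numpy as np

def log_polar_grid(n_ecc=8, n_theta=16, r_min=1.0, r_max=64.0):
    """Sampling positions of a toy log-polar retina centred on the fixation point."""
    radii = np.geomspace(r_min, r_max, n_ecc)             # geometric spacing of eccentricities
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    xs = np.outer(radii, np.cos(thetas))
    ys = np.outer(radii, np.sin(thetas))
    return xs, ys

xs, ys = log_polar_grid()
print(xs.size, "sampling points; innermost ring at radius",
      float(np.hypot(xs[0, 0], ys[0, 0])))
&lt;/code&gt;&lt;/pre&gt;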
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-2-vision-active--active-vision-2"&gt;Lever #2: Active vision&lt;/h2&gt;
&lt;figure id="figure-jérémie-et-al-2024httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/fig_attack_rotation_imagenet.png" alt="[[Jérémie *et al*, 2024](https://laurentperrinet.github.io/publication/jeremie-25/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25/" target="_blank" rel="noopener"&gt;Jérémie &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTES&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;surprisingly, despite the loss of resolution in the periphery, we obtain results comparable to the state of the art, while being more robust to rotations and zooms.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;it is important to note that the model can process images of arbitrary size, something that is a significant limitation of current CNNs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;one ongoing perspective is first to adapt this capability to SNNs, but also&amp;hellip;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-2-vision-active--active-vision-3"&gt;Lever #2: Active vision&lt;/h2&gt;
&lt;figure id="figure-jérémie-et-al-2024httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/fig_areadne.png" alt="[[Jérémie *et al*, 2024](https://laurentperrinet.github.io/publication/jeremie-25/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25/" target="_blank" rel="noopener"&gt;Jérémie &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTES&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&amp;hellip;to include saccades, that is, to complement the system I have just presented, which identifies objects in an image, with a system that anticipates where to look next in the image.
This division of labour is inspired by the ventral and dorsal pathways of the human visual system.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;PEPR IA: multiple saccades and attention&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;how can these two levers be integrated into an embedded system?&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2&gt;&lt;u&gt;
[2024-11-18] NeuroAI: multiple interactions between Neuroscience and Artificial Intelligence
&lt;/u&gt;&lt;/h2&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;a href="https://laurentperrinet.github.io/grant/anr-anr"&gt;
&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/header.png" alt="header" height="300"&gt;
&lt;/a&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://laurentperrinet.github.io/slides/2024-11-18-journee-biomometisme/?transition=fade"&gt; &lt;i&gt; Laurent Perrinet &lt;/i&gt; &lt;/a&gt; - &lt;a href="https://laurentperrinet.github.io"&gt;https://laurentperrinet.github.io&lt;/a&gt;
&lt;br&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/featured.png" alt="ANR" height="80" width="80"&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;summary: embedded AI raises important challenges.&lt;/li&gt;
&lt;li&gt;neuroscience can make a major contribution to addressing the challenges of embedded AI.&lt;/li&gt;
&lt;li&gt;one objective: gaining scientific independence = the « Active Loop » project, for which I am looking for partners.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2024-09-09-agileneurobot-anr</title><link>https://laurentperrinet.github.io/slides/2024-09-09-agileneurobot-anr/</link><pubDate>Mon, 09 Sep 2024 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2024-09-09-agileneurobot-anr/</guid><description>&lt;section&gt;
&lt;a href="https://laurentperrinet.github.io/grant/anr-anr"&gt;
&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/header.png" alt="header" height="450"&gt;
&lt;/a&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;i&gt; Laurent Perrinet (&lt;a href="https://laurentperrinet.github.io"&gt;https://laurentperrinet.github.io&lt;/a&gt;)&lt;/i&gt;
&lt;br&gt;
&lt;a href="https://laurentperrinet.github.io/slides/2024-09-09-agileneurobot-anr/?transition=fade"&gt;
&lt;u&gt;[2024-09-09] Challenges for embedded AI&lt;/u&gt;
&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/featured.png" alt="ANR" height="80"&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;outline =&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Hello. I am Laurent Perrinet, a CNRS research director in computational neuroscience at the Institut des Neurosciences de la Timone in Marseille. Thank you for the invitation to take part in this round table on embedded AI in the space domain. I am myself passionate about aeronautics and space, which led me to the SUPAERO aeronautics school, and then towards satellite imaging, which already relied on AI in the form of neural networks. From there, thanks to meeting my mathematics professor Manuel Samuelides, I discovered computational neuroscience and the power it offers both to better understand the brain and to create new artificial intelligence systems.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Computational neuroscience is the field that tries to extract computational principles from our knowledge of biological neuroscience, such as the formal neuron and its learning capacity, which is the basic building block of neural networks. These networks led to the AI revolution with deep networks.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;I am convinced that we are at the turning point of a new era in the development of embedded systems, in which artificial intelligence has the potential to create disruptive innovations matching the performance of natural intelligence, and for which it is essential to draw inspiration from biological neuroscience.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;This is why I am very pleased to first present the ANR AgileNeuRobot project, an interdisciplinary research project aiming to develop bio-mimetic agile aerial robots for flight in real-life conditions.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With this in mind, and in order to characterise some of the challenges of embedded AI, notably in the space domain, I will present two levers inspired by biology that illustrate how neuroscience can move the field forward in a radical way. Integrating biomimetic knowledge into spacecraft can improve their resilience and adaptability in hostile environments while reducing energy consumption. I would then be delighted to open a discussion with you on these topics and to hear about your own experiences and perspectives.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="agileneurobot-fiche-didentité"&gt;AgileNeuRobot: Fact sheet&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Titre : Robots aériens agiles bio-mimetiques pour le vol en conditions réelles&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Title : Bio-mimetic agile aerial robots flying in real-life conditions&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;CES : CE23 - Intelligence Artificielle (ANR-20-CE23-0021)&lt;/li&gt;
&lt;li&gt;Duration: 4 years, from 1 October 2021 to 30 September 2025&lt;/li&gt;
&lt;li&gt;Total budget: 435 k€&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
The ANR AgileNeuRobot project is an interdisciplinary project funded by the French National Research Agency (ANR) under the « Intelligence Artificielle » call for projects (ANR-20-CE23-0021). It aims to develop bio-mimetic agile aerial robots for flight in real-life conditions over a period of 4 years, from 1 October 2021 to 30 September 2025. It is funded with a total of 435 k€ and is a convincing example of the potential impact of computational neuroscience on embedded systems, here in the field of autonomous aerial robots.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="agileneurobot-consortium"&gt;AgileNeuRobot: Consortium&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img src="https://laurentperrinet.github.io/author/stéphane-viollet/avatar.jpg" alt="SV" height="150"&gt;&lt;/th&gt;
&lt;th&gt;&lt;img src="https://laurentperrinet.github.io/author/ryad-benosman/avatar.jpg" alt="RB" height="150"&gt;&lt;/th&gt;
&lt;th&gt;&lt;img src="https://laurentperrinet.github.io/author/laurent-u-perrinet/avatar.png" alt="LP" height="150"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Stéphane Viollet&lt;/td&gt;
&lt;td&gt;Ryad Benosman&lt;/td&gt;
&lt;td&gt;Laurent Perrinet&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Julien Diperi&lt;/td&gt;
&lt;td&gt;Sio-Hoï Ieng&lt;/td&gt;
&lt;td&gt;Emmanuel Daucé&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Post-doc 1&lt;/td&gt;
&lt;td&gt;Post-doc 2&lt;/td&gt;
&lt;td&gt;PhD (&lt;a href="https://laurentperrinet.github.io/author/jean-nicolas-j%C3%A9r%C3%A9mie/" target="_blank" rel="noopener"&gt;JN Jérémie&lt;/a&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Inst Sciences Mouvement&lt;/td&gt;
&lt;td&gt;Inst de la Vision&lt;/td&gt;
&lt;td&gt;Inst Neurosci de la Timone&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;The AgileNeuRobot project is a project that I coordinate in collaboration with several institutions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the Institut des Sciences du Mouvement for the bio-inspired robotics,&lt;/li&gt;
&lt;li&gt;the Institut de la Vision for the development of new sensors,&lt;/li&gt;
&lt;li&gt;the Institut de Neurosciences de la Timone (Aix-Marseille Université) for the theoretical aspects and the integration of these disciplines.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Together, we work on both the technical and the scientific aspects needed to build these aerial robots. The goal of the project is to contribute not only to the development of new technologies, but also to the understanding and elaboration of new theoretical models explaining the natural mechanisms underlying the adaptability and learning capabilities of biological systems.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="agileneurobot-agile--performant-et-efficace"&gt;AgileNeuRobot: Agile = high-performing and efficient&lt;/h2&gt;
&lt;figure id="figure-the-system-includes-3-units-to-process-event-driven-visual-inputs-communicating-by-feed-forward-and-feed-back-paths"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/principe_agile.jpg" alt="The system includes 3 units to process event-driven visual inputs communicating by feed-forward and feed-back paths." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption data-pre="Figure&amp;nbsp;" data-post=":&amp;nbsp;" class="numbered"&gt;
The system includes 3 units to process event-driven visual inputs communicating by feed-forward and feed-back paths.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;The system under development is an example of bio-inspired robotics. It is designed to process visual data in real time and to react quickly to changes in the environment, in particular to avoid or intercept objects in flight.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;performance: keeping good acuity while responding quickly, almost immediately.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;efficiency: reduced energy requirements for autonomous operation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To achieve this, we used an insect-inspired architecture that combines event-based sensors with spiking neural networks to create an agile and high-performing system, which I will describe in the rest of the talk.&lt;/p&gt;
&lt;p&gt;But first, I would like to highlight two major constraints of this type of embedded system:&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="enjeux-de-lia-embarquée--latence-de-réponse"&gt;Challenges of embedded AI: response latency&lt;/h2&gt;
&lt;figure id="figure-visual-latencies-grimaldi-et-al-2022httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency.jpg" alt="Visual latencies [[Grimaldi *et al*, 2022]](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)" loading="lazy" data-zoomable width="55%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2022]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;First of all, biological sensory systems are made up of processing stages, each of which comes with its own delay. Here I describe the processing chain of a visual image, in this case for a child playing a game who has to click on the right button, which illustrates the different latencies of information processing from vision to action.&lt;/p&gt;
&lt;p&gt;Even if the delays in an embedded system are shorter, the information at the different processing stages can still be out of step and requires adapted processing in order to respond as immediately as possible. I am thinking in particular of the detection of very fast-moving objects in the space context.&lt;/p&gt;
&lt;p&gt;&amp;mdash;-&amp;gt; Kessler syndrome&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="enjeux-de-lia-embarquée--budget-énergétique"&gt;Challenges of embedded AI: energy budget&lt;/h2&gt;
&lt;figure id="figure-prototype-avec-caméra-événementielle-et-calculateur"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/prototype.jpg" alt="Prototype avec caméra événementielle et calculateur." loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Prototype with an event camera and an on-board computer.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;The second constraint, linked to the first: energy consumption.&lt;/p&gt;
&lt;p&gt;Here is a photo of our first prototype, which includes, in addition to the usual equipment of an aerial robot (height sensors, accelerometers, navigation computer), several cameras as well as a dedicated processing unit.&lt;/p&gt;
&lt;p&gt;It must be understood that this additional equipment consumes a non-negligible amount of energy. The battery therefore has to be sized accordingly, which in turn increases the energy required for the flight itself.&lt;/p&gt;
&lt;p&gt;I will propose two levers, inspired by biology, to move the field forward radically: not just to gain 30%, but to change scale altogether.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns"&gt;Lever #1: Spiking neural networks (SNNs)&lt;/h2&gt;
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption data-pre="Figure&amp;nbsp;" data-post=":&amp;nbsp;" class="numbered"&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;New cameras: based on the same technology as a CMOS sensor, but instead of collecting the luminance values of all the pixels at regular intervals, each pixel is independent.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the way information is represented is different: the signal consists in emitting an event if and only if a change has been observed by that pixel, which is represented here by these streams of events.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-1"&gt;Lever #1: Spiking neural networks (SNNs)&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Sensor&lt;/th&gt;
&lt;th&gt;Range&lt;/th&gt;
&lt;th&gt;Framerate&lt;/th&gt;
&lt;th&gt;Resolution&lt;/th&gt;
&lt;th&gt;Power&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Human eye&lt;/td&gt;
&lt;td&gt;60 (?) dB&lt;/td&gt;
&lt;td&gt;300 (?) fps&lt;/td&gt;
&lt;td&gt;100 (?) Mpx&lt;/td&gt;
&lt;td&gt;10 mW&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DSLR&lt;/td&gt;
&lt;td&gt;44.6 dB&lt;/td&gt;
&lt;td&gt;120 fps&lt;/td&gt;
&lt;td&gt;2&amp;ndash;20 Mpx&lt;/td&gt;
&lt;td&gt;30 W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ultra-high speed&lt;/td&gt;
&lt;td&gt;64 dB&lt;/td&gt;
&lt;td&gt;10^4 fps&lt;/td&gt;
&lt;td&gt;0.3&amp;ndash;4 Mpx&lt;/td&gt;
&lt;td&gt;300 W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Event-based&lt;/td&gt;
&lt;td&gt;120 dB&lt;/td&gt;
&lt;td&gt;10^6 fps&lt;/td&gt;
&lt;td&gt;0.1&amp;ndash;2 Mpx&lt;/td&gt;
&lt;td&gt;30 mW&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Event cameras have several remarkable properties. First, the temporal precision of the events is of the order of a microsecond, which allows a theoretical rate of the order of one million frames per second. This can be compared with a conventional camera, which runs at around a hundred frames per second, or with a high-speed camera, which can reach 10,000 frames per second. It is difficult to estimate the sampling rate of human perception: while 25 frames per second is often enough to watch a film, it has been shown that the human eye can distinguish temporal details down to the millisecond.&lt;/p&gt;
&lt;p&gt;Another important characteristic of these cameras is their ability to capture a very wide range of luminance, far exceeding that of conventional cameras, at 120 dB (a factor of one million, compared with the roughly one-in-a-thousand factor of the human eye between full moon and sunlight).&lt;/p&gt;
&lt;p&gt;It should be noted that the &amp;ldquo;spatial resolution&amp;rdquo; of these cameras is often relatively modest, of the order of a megapixel. However, this is not a technical limitation, but rather a consequence of the technological applications in which these cameras are commonly used.&lt;/p&gt;
&lt;p&gt;Compared with conventional cameras, which consume several watts, event cameras use very little electrical power, of the order of 10 milliwatts, a consumption equivalent to that of the human eye.
&lt;a href="https://en.wikipedia.org/wiki/Event_camera#Functional_description" target="_blank" rel="noopener"&gt;https://en.wikipedia.org/wiki/Event_camera#Functional_description&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-2"&gt;Lever #1: Spiking neural networks (SNNs)&lt;/h2&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/HDSNN_conv.png" alt="" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;figure id="figure-the-hd-snn-neural-network-grimaldi-et-al-2023httpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="The HD-SNN neural network [[Grimaldi *et al*, 2023]](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;These cameras seem to offer only advantages, but then, how do we process this new data representation? Neuroscience shows that neurons do not manipulate continuous values (as the units of deep learning do), but communicate in exactly the same way as these cameras, by exchanging brief prototypical pulses, the action potentials (spikes).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Our solution: an architecture similar to deep learning, but where each neuron (the elementary building block) is a simplified model of a biological spiking neuron. This raises a difficulty, which we managed to resolve theoretically. An additional advantage is that this kind of computation is currently being developed on embedded chips (just like the pixels of the event camera).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;our architecture thus works directly on this same representation. Another advantage: « always-on computing ».&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;What are the results? Can we evaluate them before such chips become available?&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-1-réseaux-de-neurones-impulsionnels-snns-3"&gt;Lever #1: Spiking neural networks (SNNs)&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network-grimaldi-et-al-2023httpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/accuracy.png" alt="The HD-SNN neural network [[Grimaldi *et al*, 2023]](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;p&gt;Time-to-Contact maps &lt;a href="https://laurentperrinet.github.io/publication/nunes-23-iccv" target="_blank" rel="noopener"&gt;[Nunes &lt;em&gt;et al&lt;/em&gt;, 2023]&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTES&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Our simulations thus show a very high efficiency (here, for categorizing a type of optic flow, which can be used to guide navigation).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An innovative aspect of our technology lies in our ability to keep as many neurons while using fewer connections. We have also shown that accuracy remains acceptable: compared with a classical technology (in orange), which degrades rapidly, our results show good accuracy, with the critical half-value only reached for a 700x gain (note the log axis). This is what is called &amp;ldquo;frugal computing&amp;rdquo;, and we are now working on its implementation within a PEPR IA project.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;This is an important step, but we can go further, and I will now present a second lever: avoid processing everything, so as to process only what is necessary.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="levier-2-vision-active--active-vision"&gt;Levier #2: Vision active / &lt;em&gt;Active Vision&lt;/em&gt;&lt;/h2&gt;
&lt;figure id="figure-jérémie-et-al-2024httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-24-ccn/featured.png" alt="[[Jérémie *et al*, 2024](https://laurentperrinet.github.io/publication/jeremie-25)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25" target="_blank" rel="noopener"&gt;Jérémie &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;To introduce this, let me first illustrate it with the work of the Russian researcher Yarbus in the middle of the last century. When a visual scene is presented to an observer (as with the painting shown in panel A), the observer performs a series of jumps across the image, which are called saccades.&lt;/p&gt;
&lt;p&gt;Indeed, our vision is foveated, so that most of our visual resources are concentrated along the axis of gaze. This property co-evolved with the ability to make fast eye movements and confers an evolutionary advantage on predators, which can act more quickly on their environment to catch prey.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-2-vision-active--active-vision-1"&gt;Levier #2: Vision active / &lt;em&gt;Active Vision&lt;/em&gt;&lt;/h2&gt;
&lt;figure id="figure-jérémie-et-al-2024httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/featured.jpg" alt="[[Jérémie *et al*, 2024](https://laurentperrinet.github.io/publication/jeremie-25/)]" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25/" target="_blank" rel="noopener"&gt;Jérémie &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;This ability to act on the sensory input, and in particular to deploy this kind of attentional capacity, is largely absent from classical machine-learning approaches; we were able to implement it thanks to the ANR project.&lt;/p&gt;
&lt;p&gt;To do so, we used a log-polar transform which concentrates information around the axis of gaze, as can be seen inside the region marked in grey (a minimal sketch of such a mapping is given after these notes). Note also the importance of the point on which the gaze lands, in particular whether it is far from or close to the object of interest.&lt;/p&gt;
&lt;/aside&gt;
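&lt;p&gt;As a minimal sketch (assuming NumPy; an illustration of a log-polar-like sampling, not the exact transform used in the paper), one can resample an image on a grid whose resolution is highest near the point of gaze:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def log_polar_sample(img, n_ecc=32, n_theta=64, r_min=2.0):
    """Resample a grayscale image on a log-polar grid centred on the point of
    gaze (here, the image centre): eccentricities are log-spaced, so that the
    sampling is densest along the axis of gaze and coarser in the periphery."""
    H, W = img.shape
    cx, cy = W / 2.0, H / 2.0
    radii = np.geomspace(r_min, min(cx, cy) - 1, n_ecc)   # log-spaced eccentricities
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    out = np.zeros((n_ecc, n_theta))
    for i, r in enumerate(radii):
        for j, th in enumerate(thetas):
            x = int(round(cx + r * np.cos(th)))            # nearest-neighbour sampling
            y = int(round(cy + r * np.sin(th)))
            out[i, j] = img[y, x]
    return out
&lt;/code&gt;&lt;/pre&gt;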
&lt;hr&gt;
&lt;h2 id="levier-2-vision-active--active-vision-2"&gt;Levier #2: Vision active / &lt;em&gt;Active Vision&lt;/em&gt;&lt;/h2&gt;
&lt;figure id="figure-jérémie-et-al-2024httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/fig_attack_rotation_imagenet.png" alt="[[Jérémie *et al*, 2024](https://laurentperrinet.github.io/publication/jeremie-25)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25" target="_blank" rel="noopener"&gt;Jérémie &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTES&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Surprisingly, despite the loss of resolution in the periphery, we obtain results comparable to the state of the art, while being more robust to rotations and zooms.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It is important to note that it can process images of arbitrary size, which overcomes an important limitation of current CNNs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;One ongoing perspective is, first, to adapt this capability to SNNs, but also&amp;hellip;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="levier-2-vision-active--active-vision-3"&gt;Levier #2: Vision active / &lt;em&gt;Active Vision&lt;/em&gt;&lt;/h2&gt;
&lt;figure id="figure-jérémie-et-al-2024httpslaurentperrinetgithubiopublicationjeremie-25"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/jeremie-25/fig_areadne.png" alt="[[Jérémie *et al*, 2024](https://laurentperrinet.github.io/publication/jeremie-25)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-25" target="_blank" rel="noopener"&gt;Jérémie &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTES&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&amp;hellip;to include saccades, that is, to complement the system I have just presented, which identifies objects in an image, with a system that anticipates where to look in that image.
This division of labor is inspired by the ventral and dorsal pathways of the human visual system.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;PEPR IA: multiple saccades and attention&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;How can these two levers be integrated into an embedded system?&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;a href="https://laurentperrinet.github.io/grant/anr-anr"&gt;
&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/header.png" alt="header" height="450"&gt;
&lt;/a&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;i&gt; Laurent Perrinet (&lt;a href="https://laurentperrinet.github.io"&gt;https://laurentperrinet.github.io&lt;/a&gt;)&lt;/i&gt;
&lt;br&gt;
&lt;a href="https://laurentperrinet.github.io/slides/2024-09-09-agileneurobot-anr/?transition=fade"&gt;
&lt;u&gt;[2024-09-09] Enjeux pour l'IA embarquée&lt;/u&gt;
&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/featured.png" alt="ANR" height="80"&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;summary: embedded AI raises important challenges.&lt;/li&gt;
&lt;li&gt;neuroscience can make a major contribution to solving the challenges of embedded AI.&lt;/li&gt;
&lt;li&gt;one objective: gaining scientific independence = the &amp;ldquo;Active Loop&amp;rdquo; project, for which I am looking for partners.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2024-05-13-master-m-4-nc</title><link>https://laurentperrinet.github.io/slides/2024-05-13-master-m-4-nc/</link><pubDate>Mon, 13 May 2024 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2024-05-13-master-m-4-nc/</guid><description>&lt;section&gt;
&lt;h1 id="artificial-neural-networks-and-machine-learning-applied-to-the-understanding-of-biological-vision"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2024-05-13-master-m-4-nc/?transition=fade" target="_blank" rel="noopener"&gt;Artificial neural networks and machine learning applied to the understanding of biological vision&lt;/a&gt;&lt;/h1&gt;
&lt;h3 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h3&gt;
&lt;h3 id="master-m4nc-de-l"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2024-05-13-master-m-4-nc/" target="_blank" rel="noopener"&gt;[2024-05-13]&lt;/a&gt;&lt;a href="https://neuromod.univ-cotedazur.eu" target="_blank" rel="noopener"&gt;Master M4NC de l&amp;rsquo;institut NeuroMod, cours Prospective Innovation and Research&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;outline =&lt;/li&gt;
&lt;li&gt;fact: paradoxically, vision is a complex process even for the simplest of functions&lt;/li&gt;
&lt;li&gt;objective= understand biological vision&lt;/li&gt;
&lt;li&gt;interaction between artificial and natural NNs&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="principles-of-vision"&gt;Principles of Vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;cut in different levels: Marr (+ Poggio)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;arbitrary, but useful division of labor= computational / algorithm / hardware&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dynamics (computational)&lt;/li&gt;
&lt;li&gt;CNNs (hardware)&lt;/li&gt;
&lt;li&gt;spiking (algorithm)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;First: What is the function of vision?&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-ilya-repin-1884httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_001.jpg" alt="[An Unexpected Visitor (Ilya Repin, 1884)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Ilya Repin, 1884)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;seeing= interacting with the visual world&lt;/li&gt;
&lt;li&gt;social animals: looking at emotions&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-1"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_002.jpg" alt="[An Unexpected Visitor (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;active: the eye is always moving&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fr.wikipedia.org/wiki/Alfred_Iarbous" target="_blank" rel="noopener"&gt;https://fr.wikipedia.org/wiki/Alfred_Iarbous&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;consistency of eye traces&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-2"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---age-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_003.jpg" alt="[An Unexpected Visitor - *Age?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;Age?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;social animals: looking at emotions&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-3"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---how-long-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_006.jpg" alt="[An Unexpected Visitor - *How long?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;How long?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;active: depends on task&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-rotating-snakes-akiyoshi-kitaokahttpwwwritsumeiacjpakitaokaindex-ehtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/42_rotsnakes_main.jpg" alt="[Rotating Snakes *Akiyoshi KITAOKA*](http://www.ritsumei.ac.jp/~akitaoka/index-e.html)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Rotating Snakes &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Visual illusions are a great way to understand the constraints of vision&lt;/li&gt;
&lt;li&gt;notice that here the illusion depends on your eye movements&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Kitaoka.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Ilusions of brightness or lightness &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a simpler one showing effect of context&lt;/li&gt;
&lt;li&gt;here the ever changing lighting conditions from moonlight (1 candela) to sunlight (100 000 candela)&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion_without.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;working backwards from an illusion to its cause can be intriguing&lt;/li&gt;
&lt;li&gt;hering: two parallel lines&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-3"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;appear bent&lt;/li&gt;
&lt;li&gt;effect of context -&amp;gt; 3D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-4"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-rotating-snakes-akiyoshi-kitaokahttpwwwritsumeiacjpakitaokaindex-ehtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/42_rotsnakes_main.jpg" alt="[Rotating Snakes *Akiyoshi KITAOKA*](http://www.ritsumei.ac.jp/~akitaoka/index-e.html)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Rotating Snakes &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-1976-viking-orbiter-imagehttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Face-on-mars.jpg" alt="[Cydonia Mensae (1976) *Viking Orbiter image*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (1976) &lt;em&gt;Viking Orbiter image&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;more generally it reveals vision generates a model of the world&lt;/li&gt;
&lt;li&gt;pareidolia: seeing faces in clouds, or a face on Mars&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_low.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;30 years later&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_high.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip; it&amp;rsquo;s just a rock&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="principles-of-vision-1"&gt;Principles of vision?&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;we know more about the function&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="computational-neuroscience-of-vision"&gt;Computational neuroscience of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;let&amp;rsquo;s delve into a computational theory of vision&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="computational-neuroscience-of-vision-1"&gt;Computational neuroscience of vision&lt;/h2&gt;
&lt;figure id="figure-sejnowski-koch--churchland-1998httpwwwhmsharvardedubssneurobornlabnb204paperssejnowski-koch-churchland-science1988pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Churchland92.png" alt="[[Sejnowski, Koch &amp; Churchland (1998)](http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf)]" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf" target="_blank" rel="noopener"&gt;Sejnowski, Koch &amp;amp; Churchland (1998)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;it&amp;rsquo;s a multi-scale, complex model&amp;hellip;&lt;/li&gt;
&lt;li&gt;perhaps we will never be able to comprehend it in full&lt;/li&gt;
&lt;li&gt;words are not precise enough, let&amp;rsquo;s use mathematics and models to describe this system&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="anatomy-of-the-human-visual-system"&gt;Anatomy of the Human Visual system&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.readkong.com/static/06/b0/06b09f0235ae7fcf29438ce317c10e60/optogenetic-visual-cortical-prosthesis-9612386-7.jpg" alt="" loading="lazy" data-zoomable width="61%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;let&amp;rsquo;s start with the anatomy&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="human-visual-system--the-hmax-model"&gt;Human Visual system : the HMAX model&lt;/h2&gt;
&lt;figure id="figure-serre-and-poggio-2007httpsbiologystackexchangecomquestions10955ventral-stream-pathway-and-architecture-proposed-by-poggios-group"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.stack.imgur.com/ZlFnp.png" alt="[[Serre and Poggio, 2007](https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group)]" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;Serre and Poggio, 2007&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;and a model of it&amp;hellip;&lt;/li&gt;
&lt;li&gt;CNN, the mother of all deep learning models&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="primary-visual-cortex"&gt;Primary visual cortex&lt;/h2&gt;
&lt;figure id="figure-hubel--wiesel-1962"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/scientists.jpg" alt="[Hubel &amp; Wiesel, 1962]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Hubel &amp;amp; Wiesel, 1962]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;let&amp;rsquo;s zoom in, the basic ingredient is the receptive field&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="primary-visual-cortex-1"&gt;Primary visual cortex&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://raw.githubusercontent.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/master/figures/ComplexDirSelCortCell250_title.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;[Hubel &amp;amp; Wiesel, 1962]&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a single neuron is selective to some visual features&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-nets-cnn"&gt;Convolutional Neural Nets (CNN)&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;this can be integrated in a hierarchy&amp;hellip;&lt;/li&gt;
&lt;li&gt;defining a Convolutional Neural Networks (CNN)&lt;/li&gt;
&lt;li&gt;one layer is a convolution&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-nets-cnn-1"&gt;Convolutional Neural Nets (CNN)&lt;/h2&gt;
&lt;figure id="figure-jérémie--lp-2023httpslaurentperrinetgithubiopublicationjeremie-23-ultra-fast-cat"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.mdpi.com/vision/vision-07-00029/article_deploy/html/images/vision-07-00029-g003.png" alt="[[Jérémie &amp; LP, 2023](https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/" target="_blank" rel="noopener"&gt;Jérémie &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;sota&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;One-dimensional &lt;a href="https://en.wikipedia.org/wiki/Convolution#Discrete_convolution" target="_blank" rel="noopener"&gt;discrete convolution&lt;/a&gt; (eg in time) with a kernel $g$ of radius $K$:
$$
(f \ast g)[n]=\sum_{m=-K}^{K} f[n-m] \cdot g[m]
$$&lt;/li&gt;
&lt;/ul&gt;
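&lt;p&gt;A minimal sketch (assuming NumPy) of this sum, checked against &lt;code&gt;numpy.convolve&lt;/code&gt; with zero padding at the borders:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def conv1d(f, g):
    """(f * g)[n] = sum over m in [-K, K] of f[n-m] g[m],
    for a kernel g of radius K (length 2K+1), with zero padding."""
    K = (len(g) - 1) // 2
    out = np.zeros(len(f))
    for n in range(len(f)):
        for m in range(-K, K + 1):
            if 0 &lt;= n - m &lt; len(f):
                out[n] += f[n - m] * g[m + K]   # g[m + K] stores g[m]
    return out

f = np.random.randn(100)            # input signal, e.g. in time
g = np.exp(-np.arange(-3, 4)**2)    # kernel of radius K = 3
assert np.allclose(conv1d(f, g), np.convolve(f, g, mode="same"))
&lt;/code&gt;&lt;/pre&gt;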
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;and be formalized as a convolution&amp;hellip;&lt;/li&gt;
&lt;li&gt;but what is a convolution?&lt;/li&gt;
&lt;li&gt;let&amp;rsquo;s start in 1D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-1"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Convolution of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast g)[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x-i, y-j] \cdot g[i, j]
$$&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;now in 2D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-2"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cross-correlation&lt;/strong&gt; of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x+i, y+j] \cdot g[i, j]
$$&lt;/p&gt;
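&lt;p&gt;A minimal sketch (assuming NumPy) of this 2-D cross-correlation, with zero padding; flipping the kernel on both axes turns it back into the convolution of the previous slide:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def cross_correlate2d(f, g):
    """(f * g~)[x, y] = sum over i, j in [-K, K] of f[x+i, y+j] g[i, j],
    for a kernel g of size (2K+1, 2K+1), with zero padding at the borders."""
    K = g.shape[0] // 2
    H, W = f.shape
    fp = np.pad(f, K)                                  # zero padding
    out = np.zeros((H, W))
    for x in range(H):
        for y in range(W):
            patch = fp[x:x + 2*K + 1, y:y + 2*K + 1]
            out[x, y] = np.sum(patch * g)
    return out

def convolve2d(f, g):
    # convolution = cross-correlation with the kernel flipped on both axes
    return cross_correlate2d(f, np.flip(g))
&lt;/code&gt;&lt;/pre&gt;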
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;note the difference between convolutions and cross-correlation&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-3"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;figure id="figure-amidi--amidihttpsstanfordedushervineteachingcs-230cheatsheet-convolutional-neural-networks"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://stanford.edu/~shervine/teaching/cs-230/illustrations/convolution-layer-a.png" alt="[[Amidi &amp; Amidi](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks" target="_blank" rel="noopener"&gt;Amidi &amp;amp; Amidi&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;it is a translation-invariant feature detector&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-4"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of an image defined on several channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{c=1}^{C} \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[c, x+i, y+j] \cdot g[c, i, j]
$$&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;we can add different channels to the image (eg colors)&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-5"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of a multi-channel image for multiple output channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[k, x, y] = \sum_{c,i,j} f[c, x+i, y+j] \cdot g[k, c, i, j]
$$&lt;/p&gt;
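&lt;p&gt;A minimal sketch (assuming NumPy) of this last formula, which is what a convolutional layer computes: the kernel is indexed as (output channel, input channel, i, j), following the order used by &lt;code&gt;torch.nn.Conv2d&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def conv_layer(f, g):
    """Multi-channel cross-correlation: f has shape (C, H, W) and the kernel g
    has shape (K_out, C, 2K+1, 2K+1); returns a map of shape (K_out, H, W)
    (zero padding, stride 1)."""
    C, H, W = f.shape
    K = g.shape[-1] // 2
    fp = np.pad(f, ((0, 0), (K, K), (K, K)))           # pad space, not channels
    out = np.zeros((g.shape[0], H, W))
    for x in range(H):
        for y in range(W):
            patch = fp[:, x:x + 2*K + 1, y:y + 2*K + 1]      # (C, 2K+1, 2K+1)
            # one value per output channel k: sum over c, i and j
            out[:, x, y] = np.einsum("cij,kcij-&gt;k", patch, g)
    return out

f = np.random.randn(3, 32, 32)       # e.g. an RGB image
g = np.random.randn(16, 3, 5, 5)     # 16 output channels, radius K = 2
assert conv_layer(f, g).shape == (16, 32, 32)
&lt;/code&gt;&lt;/pre&gt;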
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;now we get to the full CNN&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-the-hmax-model"&gt;CNN: the HMAX model&lt;/h2&gt;
&lt;figure id="figure-serre-and-poggio-2006httpsbiologystackexchangecomquestions10955ventral-stream-pathway-and-architecture-proposed-by-poggios-group"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.stack.imgur.com/ZlFnp.png" alt="[[Serre and Poggio, 2006]](https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group)" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;[Serre and Poggio, 2006]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;sota&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-challenges"&gt;CNN: challenges&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;novel challenges for CNNs&lt;/li&gt;
&lt;li&gt;1/ backpropagation is not bioplausible&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-predictive-processing"&gt;CNN: Predictive processing&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;modifications= adding sparse coding + feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-predictive-processing-1"&gt;CNN: Predictive processing&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= interpretable features&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-topography"&gt;CNN: Topography&lt;/h2&gt;
&lt;figure id="figure-bosking-et-al-1997"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Bosking97Fig4.jpg" alt="[Bosking *et al*, 1997]" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Bosking &lt;em&gt;et al&lt;/em&gt;, 1997]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;topography?&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-topography-1"&gt;CNN: Topography&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2022httpslaurentperrinetgithubiopublicationfranciosini-21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/franciosini-21/featured.jpg" alt="[[Boutin *et al*, 2022](https://laurentperrinet.github.io/publication/franciosini-21/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/franciosini-21/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2022&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= bio-mimetism&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="computational-neuroscience-of-vision-2"&gt;Computational neuroscience of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;neuroAI&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="dynamics-of-vision"&gt;Dynamics of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;another important missing feature: time&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-1"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-visual-latencies-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency_bg.jpg" alt="Visual latencies ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In particular in our group, we are interested in dynamics of neural processing&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The visual system is very efficient in generating a decision from the retinal image to the different stages of the visual pathways, here for a macaque monkey, a reaction of finger muscles in about 300 milliseconds.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the process of categorizing an object takes 10 layers&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-2"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-visual-latencies-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency.jpg" alt="Visual latencies ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;the latencies are similar in the human brain, merely scaled up with brain size&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;as a consequence, it is thought that this efficiency is achieved by spikes, that is, brief all-or-none events which are passed across the very large network that forms the brain, from one assembly of neurons to another.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-3"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-4"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/figure-tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston, 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston, 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-5"&gt;Dynamics of vision&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/perrinet-19-temps/flash_lag.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-6"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-diagonal-markov-model-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_DiagonalMarkov.jpg" alt="Diagonal markov model ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Diagonal markov model (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-7"&gt;Dynamics of vision&lt;/h2&gt;
&lt;!--
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/PBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/MBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/positional-delay.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;Flash-lag effect: MBP (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;)&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="dynamics-of-vision-8"&gt;Dynamics of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks-snn"&gt;Spiking Neural Networks (SNN)&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-leaky-integrate-and-fire-neuron"&gt;SNN: Leaky Integrate-and-Fire Neuron&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
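&lt;p&gt;As a minimal sketch (an illustration, not the exact model behind the animation), a leaky integrate-and-fire neuron can be simulated with a forward-Euler scheme in a few lines of Python:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def lif(I, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron driven by an input current I (one value
    per time step): tau dV/dt = -(V - v_rest) + I ; a spike is emitted and the
    membrane potential is reset whenever V crosses v_thresh."""
    v, spikes, trace = v_rest, [], []
    for t, i_t in enumerate(I):
        v += dt / tau * (-(v - v_rest) + i_t)   # forward-Euler integration
        if v &gt;= v_thresh:
            spikes.append(t * dt)               # spike time, in seconds
            v = v_reset
        trace.append(v)
    return np.array(spikes), np.array(trace)

# a constant supra-threshold input current produces regular spiking
spikes, v = lif(I=1.5 * np.ones(1000))
&lt;/code&gt;&lt;/pre&gt;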
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neurobiology"&gt;SNN in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.sstatic.net/ixnrz.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neurobiology-1"&gt;SNN in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neurobiology-2"&gt;SNN in neurobiology&lt;/h2&gt;
&lt;figure id="figure-diesmann-et-al-1999httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_3_diesmann_et_al_1999py"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Diesmann_et_al_1999.png" alt="[[Diesmann et al. 1999](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py" target="_blank" rel="noopener"&gt;Diesmann et al. 1999&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents.&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neurobiology-3"&gt;SNN in neurobiology&lt;/h2&gt;
&lt;figure id="figure-haimerl-et-al-2019httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" alt="[[Haimerl et al, 2019](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Haimerl et al, 2019&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Izhikevich polychronization&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;yet the domain is vast, and there is a lot to do with SNNs&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-spiking-motifs"&gt;SNN: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents. We also review&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-spiking-motifs-1"&gt;SNN: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-spiking-motifs-2"&gt;SNN: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/HSD.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HSD neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;event-based cameras&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-1"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/HDSNN_conv.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-2"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HSD neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-3"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;nice kernels&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-4"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/accuracy.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;frugal computing&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="spiking-neural-networks-snn-1"&gt;Spiking Neural Networks (SNN)&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="artificial-neural-networks-applied-to-the-understanding-of-biological-vision"&gt;Artificial neural networks applied to the understanding of biological vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Only the speaker can read these notes&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;S&lt;/code&gt; key to view&lt;/li&gt;
&lt;li&gt;more on &lt;a href="https://raw.githubusercontent.com/wowchemy/starter-hugo-academic/master/exampleSite/content/slides/example/index.md" target="_blank" rel="noopener"&gt;doc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="artificial-neural-networks-and-machine-learning-applied-to-the-understanding-of-biological-vision-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2024-05-13-master-m-4-nc/?transition=fade" target="_blank" rel="noopener"&gt;Artificial neural networks and machine learning applied to the understanding of biological vision&lt;/a&gt;&lt;/h1&gt;
&lt;h3 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h3&gt;
&lt;h3 id="-master-m4nc-de-l"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2024-05-13-master-m-4-nc/" target="_blank" rel="noopener"&gt;[2024-05-13]&lt;/a&gt; &lt;a href="https://neuromod.univ-cotedazur.eu" target="_blank" rel="noopener"&gt;Master M4NC de l&amp;rsquo;institut NeuroMod, cours Prospective Innovation and Research&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;objective= understand biological vision&lt;/li&gt;
&lt;li&gt;interaction between artificial and natural NNs&lt;/li&gt;
&lt;li&gt;outline&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2024-04-10-ue-neurosciences-computationnelles</title><link>https://laurentperrinet.github.io/slides/2024-04-10-ue-neurosciences-computationnelles/</link><pubDate>Wed, 10 Apr 2024 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2024-04-10-ue-neurosciences-computationnelles/</guid><description>&lt;section&gt;
&lt;h1 id="artificial-neural-networks-applied-to-the-understanding-of-biological-vision"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2024-04-10-ue-neurosciences-computationnelles/?transition=fade" target="_blank" rel="noopener"&gt;Artificial neural networks applied to the understanding of biological vision&lt;/a&gt;&lt;/h1&gt;
&lt;h3 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h3&gt;
&lt;h3 id="-master-1-neurosciences-et-sciences-cognitives"&gt;&lt;u&gt;&lt;a href="https://ametice.univ-amu.fr/course/view.php?id=95116" target="_blank" rel="noopener"&gt;[2024-04-10]&lt;/a&gt; &lt;a href="https://sciences.univ-amu.fr/fr/formation/masters/master-neurosciences" target="_blank" rel="noopener"&gt;Master 1 Neurosciences et Sciences Cognitives.&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;outline =&lt;/li&gt;
&lt;li&gt;fact: paradoxically, vision is a complex process even for the simplest function&lt;/li&gt;
&lt;li&gt;objective= understand biological vision&lt;/li&gt;
&lt;li&gt;interaction between artificial and natural NNs&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="principles-of-vision"&gt;Principles of Vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;cut in different levels: Marr (+ Poggio)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;arbitrary, but useful division of labor= computational / algorithm / hardware&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dynamics (computational)&lt;/li&gt;
&lt;li&gt;CNNs (hardware)&lt;/li&gt;
&lt;li&gt;spiking (algorithm)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;First: What is the function of vision?&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-ilya-repin-1884httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_001.jpg" alt="[An Unexpected Visitor (Ilya Repin, 1884)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Ilya Repin, 1884)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;seeing= interacting with the visual world&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-1"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_002.jpg" alt="[An Unexpected Visitor (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;active: the eye is always moving&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fr.wikipedia.org/wiki/Alfred_Iarbous" target="_blank" rel="noopener"&gt;https://fr.wikipedia.org/wiki/Alfred_Iarbous&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;consistency of eye traces&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-2"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---age-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_003.jpg" alt="[An Unexpected Visitor - *Age?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;Age?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;social animals: looking at emotions&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-3"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---how-long-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_006.jpg" alt="[An Unexpected Visitor - *How long?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;How long?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;active: depends on task&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-rotating-snakes-akiyoshi-kitaokahttpwwwritsumeiacjpakitaokaindex-ehtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/42_rotsnakes_main.jpg" alt="[Rotating Snakes *Akiyoshi KITAOKA*](http://www.ritsumei.ac.jp/~akitaoka/index-e.html)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Rotating Snakes &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Visual illusions are a great way to understand the constraints of vision&lt;/li&gt;
&lt;li&gt;notice that here the illusion depends on your eye movements&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Kitaoka.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Ilusions of brightness or lightness &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a simpler one showing effect of context&lt;/li&gt;
&lt;li&gt;here, the ever-changing lighting conditions, from moonlight (1 candela) to sunlight (100 000 candela)&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion_without.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;the process of inverting an illusion to infer its cause can be intriguing&lt;/li&gt;
&lt;li&gt;hering: two parallel lines&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-3"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;h2 id="hahahugoshortcode400s18hbhb"&gt;&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;appear bent&lt;/li&gt;
&lt;li&gt;effect of context -&amp;gt; 3D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;&lt;/h2&gt;
&lt;h2 id="visual-illusions--pareidolia"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-1976-viking-orbiter-imagehttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Face-on-mars.jpg" alt="[Cydonia Mensae (1976) *Viking Orbiter image*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (1976) &lt;em&gt;Viking Orbiter image&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;more generally it reveals vision generates a model of the world&lt;/li&gt;
&lt;li&gt;pareidolia: seeing faces in clouds, or a face on Mars&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_low.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;30 years later&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_high.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip; it&amp;rsquo;s just a rock&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="principles-of-vision-1"&gt;Principles of vision?&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;we know more about the function&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="computational-neuroscience-of-vision"&gt;Computational neuroscience of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;let&amp;rsquo;s delve into a computational theory of vision&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="computational-neuroscience-of-vision-1"&gt;Computational neuroscience of vision&lt;/h2&gt;
&lt;figure id="figure-sejnowski-koch--churchland-1998httpwwwhmsharvardedubssneurobornlabnb204paperssejnowski-koch-churchland-science1988pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Churchland92.png" alt="[[Sejnowski, Koch &amp; Churchland (1998)](http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf)]" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf" target="_blank" rel="noopener"&gt;Sejnowski, Koch &amp;amp; Churchland (1998)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;it&amp;rsquo;s a multi-scale, complex model&amp;hellip;&lt;/li&gt;
&lt;li&gt;perhaps we will never be able to comprehend it in full&lt;/li&gt;
&lt;li&gt;words are not precise enough, let&amp;rsquo;s use mathematics and models to describe this system&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="anatomy-of-the-human-visual-system"&gt;Anatomy of the Human Visual system&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.readkong.com/static/06/b0/06b09f0235ae7fcf29438ce317c10e60/optogenetic-visual-cortical-prosthesis-9612386-7.jpg" alt="" loading="lazy" data-zoomable width="61%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;let&amp;rsquo;s start with the anatomy&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="human-visual-system--the-hmax-model"&gt;Human Visual system : the HMAX model&lt;/h2&gt;
&lt;figure id="figure-serre-and-poggio-2007httpsbiologystackexchangecomquestions10955ventral-stream-pathway-and-architecture-proposed-by-poggios-group"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.stack.imgur.com/ZlFnp.png" alt="[[Serre and Poggio, 2007](https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group)]" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;Serre and Poggio, 2007&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;and a model of it&amp;hellip;&lt;/li&gt;
&lt;li&gt;CNN, the mother of all deep learning models&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="primary-visual-cortex"&gt;Primary visual cortex&lt;/h2&gt;
&lt;figure id="figure-hubel--wiesel-1962"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/scientists.jpg" alt="[Hubel &amp; Wiesel, 1962]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Hubel &amp;amp; Wiesel, 1962]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;let&amp;rsquo;s zoom in, the basic ingredient is the receptive field&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="primary-visual-cortex-1"&gt;Primary visual cortex&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://raw.githubusercontent.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/master/figures/ComplexDirSelCortCell250_title.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;[Hubel &amp;amp; Wiesel, 1962]&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;a single neuron is selective to some visual features&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-nets-cnn"&gt;Convolutional Neural Nets (CNN)&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;this can be integrated in a hierarchy&amp;hellip;&lt;/li&gt;
&lt;li&gt;defining a Convolutional Neural Networks (CNN)&lt;/li&gt;
&lt;li&gt;one layer is a convolution&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-nets-cnn-1"&gt;Convolutional Neural Nets (CNN)&lt;/h2&gt;
&lt;figure id="figure-jérémie--lp-2023httpslaurentperrinetgithubiopublicationjeremie-23-ultra-fast-cat"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.mdpi.com/vision/vision-07-00029/article_deploy/html/images/vision-07-00029-g003.png" alt="[[Jérémie &amp; LP, 2023](https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/" target="_blank" rel="noopener"&gt;Jérémie &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;sota&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;One-dimensional &lt;a href="https://en.wikipedia.org/wiki/Convolution#Discrete_convolution" target="_blank" rel="noopener"&gt;discrete convolution&lt;/a&gt; (eg in time) with a kernel $g$ of radius $K$:
$$
(f \ast g)[n]=\sum_{m=-K}^{K} f[n-m] \cdot g[m]
$$&lt;/li&gt;
&lt;/ul&gt;
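&lt;p&gt;As a minimal sketch (the signal, kernel and values below are illustrative, not taken from the slides), the formula above can be checked numerically with NumPy, whose &lt;code&gt;np.convolve&lt;/code&gt; flips the kernel exactly as in the definition of $(f \ast g)[n]$:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# toy 1D signal f and kernel g of radius K = 1 (length 2K + 1 = 3)
f = np.array([0., 1., 2., 3., 2., 1., 0.])
g = np.array([0.25, 0.5, 0.25])

# np.convolve flips the kernel, matching (f * g)[n] = sum_m f[n - m] g[m]
out = np.convolve(f, g, mode='same')
print(out)
&lt;/code&gt;&lt;/pre&gt;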
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;and be formalized as a convolution&amp;hellip;&lt;/li&gt;
&lt;li&gt;but what is a convolution?&lt;/li&gt;
&lt;li&gt;let&amp;rsquo;s start in 1D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-1"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Convolution of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast g)[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x-i, y-j] \cdot g[i, j]
$$&lt;/p&gt;
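&lt;p&gt;A hedged sketch of the two-dimensional case (the image and kernel below are arbitrary placeholders), using the SciPy function &lt;code&gt;convolve2d&lt;/code&gt;, which also flips the kernel as in the formula above:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np
from scipy.signal import convolve2d

image  = np.random.rand(8, 8)              # a toy image f
kernel = np.array([[0., 1., 0.],
                   [1., -4., 1.],
                   [0., 1., 0.]])           # a 3x3 kernel g (radius K = 1)

# 'same' keeps the output on the original (8, 8) grid
conv = convolve2d(image, kernel, mode='same')
print(conv.shape)
&lt;/code&gt;&lt;/pre&gt;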
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;now in 2D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-2"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cross-correlation&lt;/strong&gt; of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x+i, y+j] \cdot g[i, j]
$$&lt;/p&gt;
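&lt;p&gt;A minimal sketch of the difference (same kind of toy image and kernel as assumed above): &lt;code&gt;correlate2d&lt;/code&gt; does not flip the kernel, and the two operations only coincide when the kernel is symmetric under a 180-degree rotation.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np
from scipy.signal import convolve2d, correlate2d

image  = np.random.rand(8, 8)
kernel = np.random.rand(3, 3)                     # generic, non-symmetric kernel

conv = convolve2d(image, kernel, mode='same')     # flips the kernel: f * g
corr = correlate2d(image, kernel, mode='same')    # no flip: cross-correlation

# False in general; True only if the kernel equals its 180-degree rotation
print(np.allclose(conv, corr))
&lt;/code&gt;&lt;/pre&gt;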
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;note the difference between convolutions and cross-correlation&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-3"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;figure id="figure-amidi--amidihttpsstanfordedushervineteachingcs-230cheatsheet-convolutional-neural-networks"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://stanford.edu/~shervine/teaching/cs-230/illustrations/convolution-layer-a.png" alt="[[Amidi &amp; Amidi](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks" target="_blank" rel="noopener"&gt;Amidi &amp;amp; Amidi&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;it is a translation-invariant feature detector&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-4"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of an image defined on several channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{c=1}^{C} \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[c, x+i, y+j] \cdot g[c, i, j]
$$&lt;/p&gt;
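&lt;p&gt;A sketch of the multi-channel case with explicit loops (sizes and values are illustrative): the channel dimension is summed out together with the spatial offsets, so a multi-channel image yields a single output map.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

C, H, W, K = 3, 8, 8, 1                       # channels, image size, kernel radius
f = np.random.rand(C, H, W)
g = np.random.rand(C, 2 * K + 1, 2 * K + 1)

# zero-pad spatially so that the indices x + i and y + j stay inside the image
fp = np.pad(f, ((0, 0), (K, K), (K, K)))

out = np.zeros((H, W))
for x in range(H):
    for y in range(W):
        window = fp[:, x:x + 2 * K + 1, y:y + 2 * K + 1]   # shape (C, 2K+1, 2K+1)
        out[x, y] = np.sum(window * g)        # sum over channels and offsets
print(out.shape)
&lt;/code&gt;&lt;/pre&gt;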
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;we can add different channels to the image (eg colors)&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-mathematics-5"&gt;CNN: Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of a multi-channel image for multiple output channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[k, x, y] = \sum_{c,i,j} f[c, x+i, y+j] \cdot g[k, c, i, j]
$$&lt;/p&gt;
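&lt;p&gt;In practice this is what &lt;code&gt;torch.nn.Conv2d&lt;/code&gt; computes (a cross-correlation, despite its name); a minimal sketch with assumed channel counts and kernel size:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch
import torch.nn as nn

C_in, C_out, K = 3, 16, 2                     # input/output channels, kernel radius
conv = nn.Conv2d(in_channels=C_in, out_channels=C_out,
                 kernel_size=2 * K + 1, padding=K, bias=False)

x = torch.randn(1, C_in, 32, 32)              # a batch of one RGB-like image
y = conv(x)
print(y.shape)                                # torch.Size([1, 16, 32, 32])

# the weight tensor follows the index order g[k, c, i, j] of the formula above
print(conv.weight.shape)                      # torch.Size([16, 3, 5, 5])
&lt;/code&gt;&lt;/pre&gt;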
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;now we get to the full CNN&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-the-hmax-model"&gt;CNN: the HMAX model&lt;/h2&gt;
&lt;figure id="figure-serre-and-poggio-2006httpsbiologystackexchangecomquestions10955ventral-stream-pathway-and-architecture-proposed-by-poggios-group"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.stack.imgur.com/ZlFnp.png" alt="[[Serre and Poggio, 2006]](https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group)" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;[Serre and Poggio, 2006]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;sota&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-challenges"&gt;CNN: challenges&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;novel challenges for CNNs&lt;/li&gt;
&lt;li&gt;1/ backpropagation is not bioplausible&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-predictive-processing"&gt;CNN: Predictive processing&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;modifications= adding sparse coding + feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-predictive-processing-1"&gt;CNN: Predictive processing&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= interpretable features&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-topography"&gt;CNN: Topography&lt;/h2&gt;
&lt;figure id="figure-bosking-et-al-1997"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Bosking97Fig4.jpg" alt="[Bosking *et al*, 1997]" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Bosking &lt;em&gt;et al&lt;/em&gt;, 1997]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;topography?&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="cnn-topography-1"&gt;CNN: Topography&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2022httpslaurentperrinetgithubiopublicationfranciosini-21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/franciosini-21/featured.jpg" alt="[[Boutin *et al*, 2022](https://laurentperrinet.github.io/publication/franciosini-21/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/franciosini-21/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2022&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= bio-mimetism&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="computational-neuroscience-of-vision-2"&gt;Computational neuroscience of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;neuroAI&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="dynamics-of-vision"&gt;Dynamics of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;another important missing feature: time&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-1"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-visual-latencies-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency_bg.jpg" alt="Visual latencies ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In particular in our group, we are interested in dynamics of neural processing&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The visual system is very efficient at generating a decision: from the retinal image, through the different stages of the visual pathways, here for a macaque monkey, to a reaction of the finger muscles in about 300 milliseconds.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the process of categorizing an object takes 10 layers&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-2"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-visual-latencies-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency.jpg" alt="Visual latencies ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;the latencies are similar in the human brain, merely scaled due to brain size&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;as a consequence, it is thought that this efficiency is achieved by spikes, that is, brief all-or-none events which are passed across the very large network that forms the brain, from one assembly of neurons to another.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-3"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-4"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/figure-tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston, 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston, 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-5"&gt;Dynamics of vision&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/perrinet-19-temps/flash_lag.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-6"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-diagonal-markov-model-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_DiagonalMarkov.jpg" alt="Diagonal markov model ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Diagonal markov model (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-7"&gt;Dynamics of vision&lt;/h2&gt;
&lt;!--
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/PBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/MBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/positional-delay.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;Flash-lag effect: MBP (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;)&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="dynamics-of-vision-8"&gt;Dynamics of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks-snn"&gt;Spiking Neural Networks (SNN)&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-leaky-integrate-and-fire-neuron"&gt;SNN: Leaky Integrate-and-Fire Neuron&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
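&lt;p&gt;As a reminder, a textbook formulation of the leaky integrate-and-fire dynamics (a generic sketch, not the exact parameters of the animation): the membrane potential $V$ leaks towards rest while integrating its input current, and a spike is emitted, followed by a reset, whenever the threshold is crossed:&lt;/p&gt;
&lt;p&gt;$$
\tau_m \frac{dV}{dt} = -(V - V_{rest}) + R \, I(t), \qquad V \geq V_{\theta} \Rightarrow \text{spike, } V \leftarrow V_{reset}
$$&lt;/p&gt;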
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neurobiology"&gt;SNN in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.sstatic.net/ixnrz.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neurobiology-1"&gt;SNN in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neurobiology-2"&gt;SNN in neurobiology&lt;/h2&gt;
&lt;figure id="figure-diesmann-et-al-1999httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_3_diesmann_et_al_1999py"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Diesmann_et_al_1999.png" alt="[[Diesmann et al. 1999](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py" target="_blank" rel="noopener"&gt;Diesmann et al. 1999&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents. We also review&amp;hellip;&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neurobiology-3"&gt;SNN in neurobiology&lt;/h2&gt;
&lt;figure id="figure-haimerl-et-al-2019httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" alt="[[Haimerl et al, 2019](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Haimerl et al, 2019&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Izhikevich polychronization&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;yet the domain is vast, and there is a lot to do in SNNs&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-spiking-motifs"&gt;SNN: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents. We also review&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-spiking-motifs-1"&gt;SNN: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-spiking-motifs-2"&gt;SNN: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/HSD.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HSD neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;event-based cameras&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-1"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/HDSNN_conv.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-2"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-3"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;nice kernels&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="snn-in-neuromorphic-engineering-4"&gt;SNN in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/accuracy.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;frugal computing&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="spiking-neural-networks-snn-1"&gt;Spiking Neural Networks (SNN)&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="artificial-neural-networks-applied-to-the-understanding-of-biological-vision-1"&gt;Artificial neural networks applied to the understanding of biological vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Only the speaker can read these notes&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;S&lt;/code&gt; key to view&lt;/li&gt;
&lt;li&gt;more on &lt;a href="https://raw.githubusercontent.com/wowchemy/starter-hugo-academic/master/exampleSite/content/slides/example/index.md" target="_blank" rel="noopener"&gt;doc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="artificial-neural-networks-applied-to-the-understanding-of-biological-vision-2"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2024-04-10-ue-neurosciences-computationnelles/?transition=fade" target="_blank" rel="noopener"&gt;Artificial neural networks applied to the understanding of biological vision&lt;/a&gt;&lt;/h2&gt;
&lt;h4 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-master-1-neurosciences-et-sciences-cognitives-1"&gt;&lt;u&gt;&lt;a href="https://ametice.univ-amu.fr/course/view.php?id=95116" target="_blank" rel="noopener"&gt;[2024-04-10]&lt;/a&gt; &lt;a href="https://sciences.univ-amu.fr/fr/formation/masters/master-neurosciences" target="_blank" rel="noopener"&gt;Master 1 Neurosciences et Sciences Cognitives.&lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;objective= understand biological vision&lt;/li&gt;
&lt;li&gt;interaction between artificial and natural NNs&lt;/li&gt;
&lt;li&gt;outline&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2024-04-17-phd-program-sparse-representations</title><link>https://laurentperrinet.github.io/slides/2024-04-17-phd-program-sparse-representations/</link><pubDate>Wed, 10 Apr 2024 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2024-04-17-phd-program-sparse-representations/</guid><description>&lt;section&gt;
&lt;h1 id="sparse-representations"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2024-04-17-phd-program-sparse-representations/?transition=fade" target="_blank" rel="noopener"&gt;Sparse representations&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2024-04-17-phd-program-sparse-representations/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="neuroschool-phd-program-in-neuroscience"&gt;&lt;u&gt;&lt;a href="https://neuro-marseille.org/en/training/phd-program/" target="_blank" rel="noopener"&gt;NeuroSchool PhD Program in Neuroscience&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2024-04-17"&gt;[2024-04-17]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;a href="https://github.com/laurentperrinet/2024-04_sparse-representations" target="_blank" rel="noopener"&gt;Code&lt;/a&gt; /
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;outline =
&lt;ul&gt;
&lt;li&gt;to summarize: sparse representations help understand biological vision in neuroscience&lt;/li&gt;
&lt;li&gt;they have practical applications in machine learning&lt;/li&gt;
&lt;li&gt;let&amp;rsquo;s sparse!&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;in practice: sparse coding in a nutshell&lt;/li&gt;
&lt;li&gt;perspective: convolutional sparse coding&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;url_code = &lt;a href="https://github.com/laurentperrinet/2024-04_sparse-representations" target="_blank" rel="noopener"&gt;https://github.com/laurentperrinet/2024-04_sparse-representations&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Not only the speaker can read these notes, Press &lt;code&gt;S&lt;/code&gt; key to view&lt;/li&gt;
&lt;li&gt;more on &lt;a href="https://raw.githubusercontent.com/wowchemy/starter-hugo-academic/master/exampleSite/content/slides/example/index.md" target="_blank" rel="noopener"&gt;doc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="sparse-representations-1"&gt;Sparse representations?&lt;/h2&gt;
&lt;!--
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.vhv.rs/dpng/d/57-574294_old-man-shrugging-shoulders-meme-hd-png-download.png" alt="" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
--&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.imgflip.com/2lmff7.jpg" alt="" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Sparse coding is a technique used in signal processing and machine learning to represent data in a more concise and efficient manner. It aims to find a sparse representation of the data, which means representing the data with only a small number of non-zero coefficients or activations. In sparse coding, a set of basis functions or atoms is typically defined, and the goal is to find a linear combination of these atoms that best represents the input data. The coefficients of this linear combination are often constrained to be sparse, meaning that only a few of them are allowed to be non-zero.
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://3minutosdearte.com/wp-content/uploads/2016/11/Mir%C3%B3-Paisaje-catal%C3%A1n-el-cazador-1923-24-e1534625628322.jpg"
&gt;
&lt;!-- &lt;img src="https://3minutosdearte.com/wp-content/uploads/2016/11/Mir%C3%B3-Paisaje-catal%C3%A1n-el-cazador-1923-24-e1534625628322.jpg" width="80%"/&gt; --&gt;
&lt;aside class="notes"&gt;
Paysage catalan (Le Chasseur)
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-computer-vision"&gt;Sparse representations in computer vision&lt;/h2&gt;
&lt;figure id="figure-lp-et-al-2004httpslaurentperrinetgithubiopublicationperrinet-04-tauc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-04-tauc/featured.png" alt="[[LP *et al*, 2004](https://laurentperrinet.github.io/publication/perrinet-04-tauc/)]" loading="lazy" data-zoomable width="55%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-04-tauc/" target="_blank" rel="noopener"&gt;LP &lt;em&gt;et al&lt;/em&gt;, 2004&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
vision is an inverse problem
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://www.christies.com/img/LotImages/2017/CKS/2017_CKS_13486_0110_000(rene_magritte_la_corde_sensible011104).jpg"
&gt;
&lt;!-- &lt;img src="https://www.christies.com/img/LotImages/2017/CKS/2017_CKS_13486_0110_000(rene_magritte_la_corde_sensible011104).jpg" width="80%"/&gt; --&gt;
&lt;aside class="notes"&gt;
René Magritte La corde sensible (Heartstring)
&lt;/aside&gt;
&lt;hr&gt;
&lt;img src="http://www.quickmeme.com/img/e7/e762d72e778aaaf26b40f606761abbdf755b6ae39caeed70fe4abb4ce7071869.jpg" width="80%"/&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;René Magritte La corde sensible (Heartstring)&lt;/p&gt;
&lt;p&gt;Occam&amp;rsquo;s razor: &amp;ldquo;Entities should not be multiplied without necessity.&amp;rdquo;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-computer-vision-1"&gt;Sparse representations in computer vision&lt;/h2&gt;
&lt;img src="https://laurentperrinet.github.io/publication/perrinet-03-ieee/v1_tiger.gif" width="60%"/&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-computer-vision-2"&gt;Sparse representations in computer vision&lt;/h2&gt;
&lt;figure id="figure-lp-and-bednar-2015httpslaurentperrinetgithubiopublicationperrinet-bednar-15"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/PerrinetBednar15/raw/master/figures/figure_synthesis.svg" alt="[[LP and Bednar, 2015]](https://laurentperrinet.github.io/publication/perrinet-bednar-15/)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/perrinet-bednar-15/" target="_blank" rel="noopener"&gt;[LP and Bednar, 2015]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;extracting edges is useful&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-computer-vision-3"&gt;Sparse representations in computer vision&lt;/h2&gt;
&lt;figure id="figure-lp-2021httpslaurentperrinetgithubiosciblogposts2021-03-27-density-of-stars-on-the-surface-of-the-skyhtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/sciblog/files/2021-03-27_generative.png" alt="[[LP, 2021](https://laurentperrinet.github.io/sciblog/posts/2021-03-27-density-of-stars-on-the-surface-of-the-sky.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/sciblog/posts/2021-03-27-density-of-stars-on-the-surface-of-the-sky.html" target="_blank" rel="noopener"&gt;LP, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
an extreme case: astrophysics
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-neuromorphic-engineering"&gt;Sparse representations in neuromorphic engineering&lt;/h2&gt;
&lt;p&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_arm-roll.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_hand-clap.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_air-guitar.webp" width="33%"/&gt;&lt;/p&gt;
&lt;!--
&lt;figure id="figure-gregor-lenz-2020httpslenzgregorcompostsevent-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://lenzgregor.com/posts/event-cameras/post-rethinking/events.gif" alt="[[Gregor Lenz, 2020](https://lenzgregor.com/posts/event-cameras/)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://lenzgregor.com/posts/event-cameras/" target="_blank" rel="noopener"&gt;Gregor Lenz, 2020&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Ultimately, we get a list of events for each pixel that can be &lt;em&gt;merged&lt;/em&gt; to represent the entire image. This list of events includes pixel addresses, times of occurrence, and polarities. Note that since events are generated over time, they are naturally sorted by their time of occurrence. These events are then transmitted in &lt;em&gt;real time&lt;/em&gt; to the output bus, often via a USB3 connection.
It&amp;rsquo;s interesting to draw a parallel between this process and the optic nerve that connects our retina to the brain. In fact, the output of the retina consists of a million ganglion cells that emit action potentials, which are the only source of information transmitted by the &lt;em&gt;optic nerve&lt;/em&gt;.&lt;/p&gt;
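&lt;p&gt;As a purely illustrative sketch, such an event list can be held in a NumPy structured array; the field names and dtypes below are assumptions, not the actual output format of any camera SDK:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# each event: pixel address (x, y), timestamp t (e.g. in microseconds), polarity p
event_dtype = np.dtype([("x", np.uint16), ("y", np.uint16),
                        ("t", np.uint64), ("p", np.int8)])

# toy stream of three events, naturally sorted by time of occurrence
events = np.array([(12, 34, 1000, 1), (12, 35, 1040, -1), (200, 7, 1100, 1)],
                  dtype=event_dtype)
print(events["t"])  # timestamps come out already ordered
&lt;/code&gt;&lt;/pre&gt;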
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.researchgate.net/profile/Guido-Croon/publication/313221316/figure/fig2/AS:668997448134663@1536512829861/Picture-of-the-event-based-camera-employed-in-this-work-the-DVS_W640.jpg" target="_blank" rel="noopener"&gt;https://www.researchgate.net/profile/Guido-Croon/publication/313221316/figure/fig2/AS:668997448134663@1536512829861/Picture-of-the-event-based-camera-employed-in-this-work-the-DVS_W640.jpg&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-neuromorphic-engineering-1"&gt;Sparse representations in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;kernels learned for motion detection&lt;/li&gt;
&lt;li&gt;can we enforce a sparse connectivity? (beware: that&amp;rsquo;s different from sparse activity)&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-neuromorphic-engineering-2"&gt;Sparse representations in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/accuracy.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;yes, the accuracy drops, but it&amp;rsquo;s still good enough with a 500x sparsity&lt;/li&gt;
&lt;li&gt;frugal computing&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-neuroscience"&gt;Sparse representations in neuroscience&lt;/h2&gt;
&lt;figure id="figure-brunel-2001httpsbooksgooglefrbookshlfrlridb8wodqwdtsscoifndpgpa307otsknhqrj-tszsig0wi2cq2rnmxc7fvtyjoewzedlcgredir_escyvonepageqffalse"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/Brunel200Fig2.png" alt="[[Brunel, 2001](https://books.google.fr/books?hl=fr&amp;lr=&amp;id=b8woDqWdTssC&amp;oi=fnd&amp;pg=PA307&amp;ots=KNHQrJ-TsZ&amp;sig=0WI2cq2RnMXC7fVTyjOEWZEdlCg&amp;redir_esc=y#v=onepage&amp;q&amp;f=false)]" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://books.google.fr/books?hl=fr&amp;amp;lr=&amp;amp;id=b8woDqWdTssC&amp;amp;oi=fnd&amp;amp;pg=PA307&amp;amp;ots=KNHQrJ-TsZ&amp;amp;sig=0WI2cq2RnMXC7fVTyjOEWZEdlCg&amp;amp;redir_esc=y#v=onepage&amp;amp;q&amp;amp;f=false" target="_blank" rel="noopener"&gt;Brunel, 2001&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Phase diagrams of sparsely connected networks of excitatory and inhibitory spiking neurons&lt;/p&gt;
&lt;p&gt;healthy network = 1Hz = sparse activity (stronger in auditory, in insects, &amp;hellip;)&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-neuroscience-1"&gt;Sparse representations in neuroscience&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-neuroscience-2"&gt;Sparse representations in neuroscience&lt;/h2&gt;
&lt;figure id="figure-kremkow-et-al-2016httpslaurentperrinetgithubiopublicationkremkow-16"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/fncir-10-00037-g001a.jpg" alt="[[Kremkow *et al*, 2016](https://laurentperrinet.github.io/publication/kremkow-16/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/kremkow-16/" target="_blank" rel="noopener"&gt;Kremkow &lt;em&gt;et al&lt;/em&gt;, 2016&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-neuroscience-3"&gt;Sparse representations in neuroscience&lt;/h2&gt;
&lt;figure id="figure-kremkow-et-al-2016httpslaurentperrinetgithubiopublicationkremkow-16"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/fncir-10-00037-g001b.jpg" alt="[[Kremkow *et al*, 2016](https://laurentperrinet.github.io/publication/kremkow-16/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/kremkow-16/" target="_blank" rel="noopener"&gt;Kremkow &lt;em&gt;et al&lt;/em&gt;, 2016&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-neuroscience-4"&gt;Sparse representations in neuroscience&lt;/h2&gt;
&lt;figure id="figure-kremkow-et-al-2016httpslaurentperrinetgithubiopublicationkremkow-16"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/fncir-10-00037-g001.jpg" alt="[[Kremkow *et al*, 2016](https://laurentperrinet.github.io/publication/kremkow-16/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/kremkow-16/" target="_blank" rel="noopener"&gt;Kremkow &lt;em&gt;et al&lt;/em&gt;, 2016&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
Vinje and Gallant
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-2"&gt;Sparse representations?&lt;/h2&gt;
&lt;!--
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.vhv.rs/dpng/d/57-574294_old-man-shrugging-shoulders-meme-hd-png-download.png" alt="" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
--&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://memecreator.org/static/images/memes/5646953.jpg" alt="" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
in summary: Sparse representations resulting from these processes have been successfully applied in various domains such as image processing, computer vision, and audio signal processing. It has shown promise in tasks such as noise reduction, compression, feature extraction, and pattern recognition. By capturing the essential structure and characteristics of the data in a sparse representation, sparse coding can help reduce redundancy and noise, and extract meaningful features for further analysis or processing.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="sparse-representations-in-a-nutshell"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.giphy.com/26xBtPbmDlugFxUiY.webp" alt="" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;hellip;let&amp;rsquo;s delve into a computational theory of sparse coding&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Review: LP (2015), &amp;ldquo;Sparse models&amp;rdquo;, in &lt;a href="https://laurentperrinet.github.io/publication/cristobal-perrinet-keil-15-bicv/"&gt;Biologically Inspired Computer Vision&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-1"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-lp-et-al-2004httpslaurentperrinetgithubiopublicationperrinet-04-tauc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-04-tauc/featured.png" alt="[[LP *et al*, 2004](https://laurentperrinet.github.io/publication/perrinet-04-tauc/)]" loading="lazy" data-zoomable width="55%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-04-tauc/" target="_blank" rel="noopener"&gt;LP &lt;em&gt;et al&lt;/em&gt;, 2004&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-2"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-olshausen-and-field-1997httpmplabucsdedumarniigertolshaussen_1997pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/Olshausen_2.png" alt="[[Olshausen and Field (1997)](http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf)]" loading="lazy" data-zoomable width="55%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf" target="_blank" rel="noopener"&gt;Olshausen and Field (1997)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-3"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;Generative model of image synthesis:&lt;/p&gt;
&lt;p&gt;$I[x, y] = $
&lt;span class="fragment " &gt;
$\sum_{i=1}^{K} a[i] \cdot \phi[i, x, y]$
&lt;/span&gt;
&lt;span class="fragment " &gt;
$ + \varepsilon[x, y]$
&lt;/span&gt;&lt;/p&gt;
&lt;span class="fragment " &gt;
Where $\phi$ is a dictionary of $K$ atoms, $a$ is a sparse vector of coefficients, and $\varepsilon$ is a noise term.
&lt;/span&gt;
&lt;p&gt;[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-15-bicv/" target="_blank" rel="noopener"&gt;LP (2015)&lt;/a&gt;]&lt;/p&gt;
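&lt;p&gt;A minimal NumPy sketch of this generative model (image size, dictionary size and noise level are arbitrary illustrative choices, not values from the cited chapter):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

rng = np.random.default_rng(42)
N, K = 16, 32                                # image is N x N, dictionary holds K atoms
phi = rng.standard_normal((K, N, N))
phi /= np.sqrt((phi**2).sum(axis=(1, 2), keepdims=True))   # unit-norm atoms

a = np.zeros(K)                              # sparse vector: only 4 active coefficients
a[rng.choice(K, size=4, replace=False)] = rng.standard_normal(4)

sigma_n = 0.05                               # noise level of the epsilon term
I = np.tensordot(a, phi, axes=1) + sigma_n * rng.standard_normal((N, N))
&lt;/code&gt;&lt;/pre&gt;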
&lt;aside class="notes"&gt;
&lt;p&gt;generative model&lt;/p&gt;
&lt;p&gt;\phi is over-complete (otherwise the problem is trivially solved by the pseudo-inverse)&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-4"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-olshausen-and-field-1997httpmplabucsdedumarniigertolshaussen_1997pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/Olshausen_1.png" alt="[[Olshausen and Field (1997)](http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf" target="_blank" rel="noopener"&gt;Olshausen and Field (1997)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-5"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;Given an observation $I$,&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\mathcal{L}(a) &amp;amp; = - \log Pr( a | I ) \\
\end{aligned}
$$&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-6"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;Given an observation $I$,&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\mathcal{L}(a) &amp;amp; = - \log Pr( a | I ) \\
&amp;amp; = - \log Pr( I | a ) - \log Pr(a) \\
\end{aligned}
$$&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-7"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;Given an observation $I$,&lt;/p&gt;
&lt;p&gt;$$
\begin{aligned}
\mathcal{L}(a) &amp;amp; = - \log Pr( a | I ) \\
&amp;amp; = - \log Pr( I | a ) - \log Pr(a) \\
&amp;amp; = \frac{1}{2\sigma_n^2} \sum_{x, y} ( I[x, y] - \sum_{i=1}^{K} a[i] \cdot \phi[i, x, y])^2 - \sum_{i=1}^{K} \log Pr( a[i] )
\end{aligned}
$$&lt;/p&gt;
&lt;aside class="notes"&gt;
Probabilistic model
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-8"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;The problem is formalized as an optimization problem $a^\ast = \arg \min_a \mathcal{L}(a)$ with:&lt;/p&gt;
&lt;p&gt;$$
\mathcal{L}(a) = \frac{1}{2} \sum_{x, y} ( I[x, y] - \sum_{i=1}^{K} a[i] \cdot \phi[i, x, y])^2 + \lambda \cdot \sum_{i=1}^{K} ( a[i] \neq 0)
$$&lt;/p&gt;
&lt;p&gt;[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-15-bicv/" target="_blank" rel="noopener"&gt;LP (2015)&lt;/a&gt;]&lt;/p&gt;
&lt;aside class="notes"&gt;
spiking prior =&amp;gt; l0 pseudo-norm
the l0 problem is NP-hard
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-9"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;p&gt;The problem is formalized as an optimization problem $a^\ast = \arg \min_a \mathcal{L}(a)$ with:&lt;/p&gt;
&lt;p&gt;$$
\mathcal{L}(a) = \frac{1}{2} \sum_{x, y} ( I[x, y] - \sum_{i=1}^{K} a[i] \cdot \phi[i, x, y])^2 + \lambda \cdot \sum_{i=1}^{K} | a[i] |
$$&lt;/p&gt;
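&lt;p&gt;One standard way to minimize this L1-penalized loss is ISTA (iterative soft-thresholding), mentioned in the notes; a minimal sketch, assuming the image has been flattened into a vector &lt;code&gt;I_flat&lt;/code&gt; and the dictionary into a matrix &lt;code&gt;Phi&lt;/code&gt; of shape (pixels, K):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def ista(I_flat, Phi, lam=0.1, n_iter=200):
    """Minimal ISTA sketch for 1/2 ||I - Phi a||^2 + lam * ||a||_1."""
    a = np.zeros(Phi.shape[1])
    L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ a - I_flat)        # gradient of the quadratic term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft thresholding
    return a
&lt;/code&gt;&lt;/pre&gt;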
&lt;aside class="notes"&gt;
exponential prior =&amp;gt; L1 norm
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-10"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-rentzeperis-et-al-2023httpslaurentperrinetgithubiopublicationrentzeperis-23"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/rentzeperis-23/featured.png" alt="[[Rentzeperis *et al* (2023)](https://laurentperrinet.github.io/publication/rentzeperis-23/)]" loading="lazy" data-zoomable width="55%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/rentzeperis-23/" target="_blank" rel="noopener"&gt;Rentzeperis &lt;em&gt;et al&lt;/em&gt; (2023)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-11"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-olshausen-and-field-1997httpmplabucsdedumarniigertolshaussen_1997pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/Olshausen_5.png" alt="[[Olshausen and Field (1997)](http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf)]" loading="lazy" data-zoomable width="55%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://mplab.ucsd.edu/~marni/Igert/Olshaussen_1997.pdf" target="_blank" rel="noopener"&gt;Olshausen and Field (1997)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Neural implementation = gradient descent&lt;/p&gt;
&lt;p&gt;LASSO = least absolute shrinkage and selection operator&lt;/p&gt;
&lt;p&gt;Orthogonal Matching Pursuit (OMP): OMP is an iterative algorithm used for sparse signal recovery. It starts with an initial sparse solution and iteratively selects the most correlated dictionary atoms with the residual signal. OMP aims to minimize the L2 norm of the residual while maintaining sparsity. It has a greedy nature and can provide a near-optimal sparse solution.&lt;/p&gt;
&lt;p&gt;Basis Pursuit (BP): Basis Pursuit is an optimization problem that seeks the sparsest solution to an underdetermined linear system of equations. It involves minimizing the L1 norm of the coefficient vector subject to a linear constraint. BP can be solved using linear programming techniques or convex optimization algorithms.&lt;/p&gt;
&lt;p&gt;Iterative Soft Thresholding Algorithm (ISTA): ISTA is an iterative optimization algorithm commonly used in sparse coding. It alternates between a gradient descent step and a soft thresholding step. The gradient descent step minimizes the data fidelity term, and the soft thresholding step enforces sparsity by setting small coefficients to zero. ISTA converges to a sparse solution and can be used for dictionary learning.&lt;/p&gt;
&lt;p&gt;FISTA (Fast Iterative Shrinkage-Thresholding Algorithm): FISTA is an accelerated version of ISTA that improves convergence speed. It incorporates momentum into the optimization process and achieves faster convergence rates.&lt;/p&gt;
&lt;p&gt;ADMM (Alternating Direction Method of Multipliers): ADMM is an optimization technique that decomposes the original problem into smaller subproblems and solves them iteratively. It is often used for convex optimization problems with L1 regularization. ADMM has been applied to solve sparse coding problems efficiently.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;!-- &lt;section style="text-align: left;"&gt; --&gt;
&lt;h2 id="matching-pursuit-algorithm"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Init : Residual $R = I$, sparse vector $a$ such that $\forall i$, $a[i] = 0$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;while $\frac{1}{2} \sum_{x, y} R[x, y]^2 &amp;gt; \vartheta $, do :&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;instead of finding the exact solution to the approximate problem, let&amp;rsquo;s solve the exact one approximately&lt;/p&gt;
&lt;p&gt;[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-15-bicv/" target="_blank" rel="noopener"&gt;LP (2010)&lt;/a&gt;]&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-1"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Init : $R = I$, $\forall i$, $a[i] = 0$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;while $\frac{1}{2} \sum_{x, y} R[x, y]^2 &amp;gt; \vartheta $, do :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;compute $c[i] = \sum_{x, y} (R[x, y] - a[i] \cdot \phi[i, x, y])^2$&lt;/li&gt;
&lt;li&gt;Match: $i^\ast = \arg \min_i c[i]$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
greedy, one by one
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-2"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Init : $R = I$, $\forall i$, $a[i] = 0$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;while $\frac{1}{2} \sum_{x, y} R[x, y]^2 &amp;gt; \vartheta $, do :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Match : $i^\ast = \arg \max_i \sum_{x, y} R[x, y] \cdot \phi[i, x, y]$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
use of correlation instead of energy
assign the value of the sparse vector for the winning atom
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-3"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Init : $R = I$, $\forall i$, $a[i] = 0$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;while $\frac{1}{2} \sum_{x, y} R[x, y]^2 &amp;gt; \vartheta $, do :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Match :
$i^\ast = \arg \max_i \sum_{x, y} R[x, y] \cdot \phi[i, x, y]$&lt;/li&gt;
&lt;li&gt;Assign : $a[i^\ast] = \frac{\sum_{x, y} R[x, y] \cdot \phi[i^\ast, x, y]}{\sum_{x, y} \phi[i^\ast, x, y] \cdot \phi[i^\ast, x, y]}$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
use of correlation instead of energy
assign the value of the sparse vector for the winning atom
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-4"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Init : $R = I$, $\forall i$, $a[i] = 0$, and normalize $\sum_{x, y} \phi[i, x, y]^2 = 1$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;while $\frac{1}{2} \sum_{x, y} R[x, y]^2 &amp;gt; \vartheta $, do :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Match : $i^\ast = \arg \max_i \sum_{x, y} R[x, y] \cdot \phi[i, x, y]$&lt;/li&gt;
&lt;li&gt;Assign : $a[i^\ast] = \sum_{x, y} R[x, y] \cdot \phi[i^\ast, x, y]$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
use of correlation
assign the value of the sparse vector for the winning atom
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-5"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Init : $R = I$, $\forall i$, $a[i] = 0$, $\sum_{x, y} \phi[i, x, y]^2 = 1$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;while $\frac{1}{2} \sum_{x, y} R[x, y]^2 &amp;gt; \vartheta $, do :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Match : $i^\ast = \arg \max_i \sum_{x, y} R[x, y] \cdot \phi[i, x, y]$&lt;/li&gt;
&lt;li&gt;Assign : $a[i^\ast] = \sum_{x, y} R[x, y] \cdot \phi[i^\ast, x, y]$&lt;/li&gt;
&lt;li&gt;Pursuit : $R[x, y] \leftarrow R[x, y] - a[i^\ast] \cdot \phi[i^\ast, x, y]$&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;aside class="notes"&gt;
use of correlation
assign the value of the sparse vector for the winning atom
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-6"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Init : $R = I$, $\forall i$, $a[i] = 0$, $\sum_{x, y} \phi[i, x, y]^2 = 1$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;compute $c[i] = \sum_{x, y} R[x, y] \cdot \phi[i, x, y]$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;compute $X[i, j] = \sum_{x, y} \phi[i, x, y] \cdot \phi[j, x, y]$&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;while $\frac{1}{2} \sum_{x, y} R[x, y]^2 &amp;gt; \vartheta $, do :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Match : $i^\ast = \arg \max_i c[i]$&lt;/li&gt;
&lt;li&gt;Assign : $a[i^\ast] = c[i^\ast]$&lt;/li&gt;
&lt;li&gt;Pursuit : $c[i] \leftarrow c[i] - a[i^\ast] \cdot X[i, i^\ast] $&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-03-ieee" target="_blank" rel="noopener"&gt;LP (2004)&lt;/a&gt;]&lt;/p&gt;
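&lt;p&gt;A minimal NumPy sketch of this last formulation; &lt;code&gt;i_flat&lt;/code&gt; (the flattened image) and &lt;code&gt;Phi&lt;/code&gt; (the dictionary as a matrix with unit-norm columns) are assumed inputs, not names from the cited paper:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def matching_pursuit(i_flat, Phi, theta=1e-3, n_max=100):
    """Greedy MP with precomputed correlations and Gram matrix (sketch)."""
    a = np.zeros(Phi.shape[1])
    R = i_flat.copy()
    c = Phi.T @ R                       # correlations c[i]
    X = Phi.T @ Phi                     # Gram matrix X[i, j]
    n = 0
    while 0.5 * (R @ R) &amp;gt; theta and n &amp;lt; n_max:
        i_star = int(np.argmax(c))      # Match (use np.abs(c) for signed coefficients)
        coef = c[i_star]
        a[i_star] += coef               # Assign (accumulate if an atom is re-selected)
        R -= coef * Phi[:, i_star]      # residual, kept only for the stopping test
        c -= coef * X[:, i_star]        # Pursuit: update the correlations
        n += 1
    return a
&lt;/code&gt;&lt;/pre&gt;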
&lt;aside class="notes"&gt;
use of correlation
assign the value of the sparse vector for the winning atom
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-7"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/publication/perrinet-03-ieee/v1_tiger.gif" width="60%"/&gt;
&lt;aside class="notes"&gt;
ça marche très bien!
&lt;/aside&gt;
---
## Convolutional Sparse Coding --&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2015-05-22-a-hitchhiker-guide-to-matching-pursuit/MPtutorial_rec.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;Code @ &lt;a href="https://laurentperrinet.github.io/sciblog/posts/2015-05-22-a-hitchhiker-guide-to-matching-pursuit.html" target="_blank" rel="noopener"&gt;A hitchhiker guide to Matching Pursuit&lt;/a&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-8"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;p&gt;Hebbian learning (once the sparse code is known):&lt;/p&gt;
&lt;p&gt;$$
\phi[i, x, y] \leftarrow \phi[i, x, y] + \eta \cdot a[i] \cdot ( I[x, y] - \sum_{j=1}^{K} a[j] \cdot \phi[j, x, y] )
$$&lt;/p&gt;
&lt;p&gt;[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-15-bicv/" target="_blank" rel="noopener"&gt;LP (2015)&lt;/a&gt;]&lt;/p&gt;
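&lt;p&gt;As a vectorized sketch (not the exact code of the cited chapter), one Hebbian step on a dictionary &lt;code&gt;phi&lt;/code&gt; of shape (K, N, N), given the sparse code &lt;code&gt;a&lt;/code&gt; and the image &lt;code&gt;I&lt;/code&gt;, could read:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def hebbian_update(phi, a, I, eta=0.01):
    """Move each atom along its own contribution to the reconstruction error."""
    residual = I - np.tensordot(a, phi, axes=1)        # I - sum_j a[j] * phi[j]
    phi = phi + eta * a[:, None, None] * residual[None, :, :]
    # usual extra step: keep atoms at unit norm so coefficients stay comparable
    phi /= np.sqrt((phi**2).sum(axis=(1, 2), keepdims=True))
    return phi
&lt;/code&gt;&lt;/pre&gt;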
&lt;aside class="notes"&gt;
&lt;p&gt;Unsupervised Learning of the dictionary&lt;/p&gt;
&lt;p&gt;Hebbian learning&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="matching-pursuit-algorithm-9"&gt;Matching pursuit algorithm&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/ssc.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="sparse-representations-in-a-nutshell-12"&gt;Sparse representations in a nutshell&lt;/h2&gt;
&lt;figure id="figure-lp-et-al-2004httpslaurentperrinetgithubiopublicationperrinet-04-tauc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-04-tauc/featured.png" alt="[[LP *et al*, 2004](https://laurentperrinet.github.io/publication/perrinet-04-tauc/)]" loading="lazy" data-zoomable width="55%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-04-tauc/" target="_blank" rel="noopener"&gt;LP &lt;em&gt;et al&lt;/em&gt;, 2004&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="convolutional-sparse-coding"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;this can be integrated in a hierarchy&amp;hellip;&lt;/li&gt;
&lt;li&gt;defining a Convolutional Neural Networks (CNN)&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolutional-neural-nets-cnn"&gt;Convolutional Neural Nets (CNN)&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;one layer is a convolution - so let&amp;rsquo;s describe that first&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolutional-neural-nets-cnn-1"&gt;Convolutional Neural Nets (CNN)&lt;/h3&gt;
&lt;figure id="figure-jérémie--lp-2023httpslaurentperrinetgithubiopublicationjeremie-23-ultra-fast-cat"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.mdpi.com/vision/vision-07-00029/article_deploy/html/images/vision-07-00029-g003.png" alt="[[Jérémie &amp; LP, 2023](https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/" target="_blank" rel="noopener"&gt;Jérémie &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;sota&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolution-mathematics"&gt;Convolution: Mathematics&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;One-dimensional &lt;a href="https://en.wikipedia.org/wiki/Convolution#Discrete_convolution" target="_blank" rel="noopener"&gt;discrete convolution&lt;/a&gt; (eg in time) with a kernel $g$ of radius $K$:
$$
(f \ast g)[n]=\sum_{m=-K}^{K} f[n-m] \cdot g[m]
$$&lt;/li&gt;
&lt;/ul&gt;
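&lt;p&gt;A direct (naive) implementation of this formula, with zero-padding outside the signal; as a sanity check it matches NumPy&amp;rsquo;s &lt;code&gt;np.convolve&lt;/code&gt; in &amp;ldquo;same&amp;rdquo; mode for an odd-length kernel:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def conv1d(f, g, K):
    """(f * g)[n] = sum_m f[n-m] g[m], for m in [-K, K]; g has length 2K+1."""
    out = np.zeros_like(f, dtype=float)
    for n in range(len(f)):
        for m in range(-K, K + 1):
            if 0 &amp;lt;= n - m &amp;lt; len(f):          # zero-padding outside the signal
                out[n] += f[n - m] * g[m + K]  # g[m + K] stores g[m]
    return out

f = np.array([0., 1., 2., 3., 2., 1., 0.])
g = np.array([1., 2., 1.]) / 4.0               # K = 1, a small smoothing kernel
print(np.allclose(conv1d(f, g, K=1), np.convolve(f, g, mode="same")))  # True
&lt;/code&gt;&lt;/pre&gt;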
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;and be formalized as a convolution&amp;hellip;&lt;/li&gt;
&lt;li&gt;but what is a convolution?&lt;/li&gt;
&lt;li&gt;let&amp;rsquo;s start in 1D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolution-mathematics-1"&gt;Convolution: Mathematics&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Convolution of an image (two-dimensional) with a kernel $g$ of radius $K$ (i.e. of size $(2K+1)\times(2K+1)$):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast g)[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x-i, y-j] \cdot g[i, j]
$$&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;now in 2D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolution-mathematics-2"&gt;Convolution: Mathematics&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cross-correlation&lt;/strong&gt; of an image (two-dimensional) with a kernel $g$ of radius $K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x+i, y+j] \cdot g[i, j]
$$&lt;/p&gt;
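&lt;p&gt;To make the sign flip concrete: cross-correlating with $g$ is the same as convolving with the flipped kernel. A small check (assuming SciPy is available); note that the &amp;ldquo;convolution&amp;rdquo; layers of deep-learning libraries actually compute this cross-correlation:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np
from scipy.signal import convolve2d, correlate2d

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))
g = rng.standard_normal((3, 3))          # odd-sized kernel, radius K = 1

corr = correlate2d(f, g, mode="same")                      # cross-correlation with g
conv_flipped = convolve2d(f, g[::-1, ::-1], mode="same")   # convolution with flipped g
print(np.allclose(corr, conv_flipped))   # True
&lt;/code&gt;&lt;/pre&gt;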
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;note the difference between convolutions and cross-correlation&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolution-mathematics-3"&gt;Convolution: Mathematics&lt;/h3&gt;
&lt;figure id="figure-amidi--amidihttpsstanfordedushervineteachingcs-230cheatsheet-convolutional-neural-networks"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://stanford.edu/~shervine/teaching/cs-230/illustrations/convolution-layer-a.png" alt="[[Amidi &amp; Amidi](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks" target="_blank" rel="noopener"&gt;Amidi &amp;amp; Amidi&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;it is a translation-invariant feature detector&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolution-mathematics-4"&gt;Convolution: Mathematics&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of an image defined on several channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{c=1}^{C} \sum_{i, j} f[c, x+i, y+j] \cdot g[c, i, j]
$$&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;we can add different channels to the image (eg colors)&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="convolution-mathematics-5"&gt;Convolution: Mathematics&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of a multi-channel image for multiple output channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[k, x, y] = \sum_{c,i,j} f[c, x+i, y+j] \cdot g[k, c, i, j]
$$&lt;/p&gt;
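&lt;p&gt;This multi-channel, multi-output cross-correlation is exactly what a 2D &amp;ldquo;convolution&amp;rdquo; layer computes; a minimal PyTorch sketch (shapes are illustrative, assuming torch is installed):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch
import torch.nn.functional as F

C, K_out, K = 3, 8, 2                            # input channels, output channels, radius
f = torch.randn(1, C, 32, 32)                    # a batch of one multi-channel image
g = torch.randn(K_out, C, 2 * K + 1, 2 * K + 1)  # kernels g[k, c, i, j]

out = F.conv2d(f, g, padding=K)                  # cross-correlation, despite the name
print(out.shape)                                 # torch.Size([1, 8, 32, 32])
&lt;/code&gt;&lt;/pre&gt;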
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;now we get to the full CNN&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-the-hmax-model"&gt;CNN: the HMAX model&lt;/h3&gt;
&lt;figure id="figure-serre-and-poggio-2006httpsbiologystackexchangecomquestions10955ventral-stream-pathway-and-architecture-proposed-by-poggios-group"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.stack.imgur.com/ZlFnp.png" alt="[[Serre and Poggio, 2006]](https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group)" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;[Serre and Poggio, 2006]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;sota&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-challenges"&gt;CNN: challenges&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;novel challenges for CNNs&lt;/li&gt;
&lt;li&gt;1/ backpropagation is not bioplausible&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="convolutional-sparse-coding-1"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_b.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;adding a first loop of sparse coding&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-sparse-coding-2"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2015-05-22-a-hitchhiker-guide-to-matching-pursuit/MPtutorial_rec.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;Code @ &lt;a href="https://laurentperrinet.github.io/sciblog/posts/2015-05-22-a-hitchhiker-guide-to-matching-pursuit.html" target="_blank" rel="noopener"&gt;A hitchhiker guide to Matching Pursuit&lt;/a&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-sparse-coding-3"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;figure id="figure-lp-2015httpslaurentperrinetgithubiopublicationperrinet-15-bicv"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/perrinet-15-bicv/featured.png" alt="[[LP, 2015](https://laurentperrinet.github.io/publication/perrinet-15-bicv/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/perrinet-15-bicv/" target="_blank" rel="noopener"&gt;LP, 2015&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;p&gt;Code @ &lt;a href="https://nbviewer.org/github/bicv/SparseEdges/blob/master/SparseEdges.ipynb" target="_blank" rel="noopener"&gt;SparseEdges&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;good performance - depends on the size of the input image&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-sparse-coding-4"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;figure id="figure-ladret-et-al-2024httpslaurentperrinetgithubiopublicationladret-24-sparse"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/ladret-23-iclr/fig_dicos.png" alt="[[Ladret *et al*, 2024](https://laurentperrinet.github.io/publication/ladret-24-sparse/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/ladret-24-sparse/" target="_blank" rel="noopener"&gt;Ladret &lt;em&gt;et al&lt;/em&gt;, 2024&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;heterogeneity is important&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-sparse-coding-5"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_c.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;novel challenges for CNNs&lt;/li&gt;
&lt;li&gt;1/ backpropagation is not biologically plausible&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-sparse-coding-6"&gt;Convolutional Sparse Coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;novel challenges for CNNs&lt;/li&gt;
&lt;li&gt;1/ backpropagation is not biologically plausible&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/SDPC_3.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result on MNIST&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing-1"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure4a.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;modifications= adding sparse coding + feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing-2"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure4b.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;modifications= adding sparse coding + feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing-3"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= interpretable features&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-predictive-processing-4"&gt;CNN: Predictive processing&lt;/h3&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/training_video_ATT.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= interpretable features&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-topography"&gt;CNN: Topography&lt;/h3&gt;
&lt;figure id="figure-bosking-et-al-1997"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Bosking97Fig4.jpg" alt="[Bosking *et al*, 1997]" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Bosking &lt;em&gt;et al&lt;/em&gt;, 1997]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;topography?&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="cnn-topography-1"&gt;CNN: Topography&lt;/h3&gt;
&lt;figure id="figure-boutin-et-al-2022httpslaurentperrinetgithubiopublicationfranciosini-21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/franciosini-21/featured.jpg" alt="[[Boutin *et al*, 2022](https://laurentperrinet.github.io/publication/franciosini-21/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/franciosini-21/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2022&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;result= biomimicry&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="sparse-representations-3"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2024-04-17-phd-program-sparse-representations/?transition=fade" target="_blank" rel="noopener"&gt;Sparse representations&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2024-04-17-phd-program-sparse-representations/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="neuroschool-phd-program-in-neuroscience-1"&gt;&lt;u&gt;&lt;a href="https://neuro-marseille.org/en/training/phd-program/" target="_blank" rel="noopener"&gt;NeuroSchool PhD Program in Neuroscience&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2024-04-17-1"&gt;[2024-04-17]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;a href="https://github.com/laurentperrinet/2024-04_sparse-representations" target="_blank" rel="noopener"&gt;Code&lt;/a&gt; /
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;to summarize: sparse representations help us understand biological vision in neuroscience&lt;/li&gt;
&lt;li&gt;they have practical applications in machine learning&lt;/li&gt;
&lt;li&gt;let&amp;rsquo;s sparse!&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2024-03-27-emergences.md</title><link>https://laurentperrinet.github.io/slides/2024-03-27-emergences/</link><pubDate>Wed, 27 Mar 2024 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2024-03-27-emergences/</guid><description>&lt;section&gt;
&lt;h3 id="analyser-de-larges-volumes-de-données-neurobiologiques"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2024-03-27-emergences/?transition=fade" target="_blank" rel="noopener"&gt;Analyser de larges volumes de données neurobiologiques&lt;/a&gt;&lt;/h3&gt;
&lt;h4 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-emergences-workshop-autrans-france"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2024-03-27-emergences" target="_blank" rel="noopener"&gt;[2024-03-27]&lt;/a&gt; &lt;a href="https://laurentperrinet.github.io/grant/emergences/" target="_blank" rel="noopener"&gt;Emergences workshop, Autrans, France&lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;h4 id="laurentperrinetuniv-amufr"&gt;&lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/h4&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;em&gt;Hello&lt;/em&gt;, can you hear me in the back? First of all, I&amp;rsquo;d like to &lt;em&gt;thank&lt;/em&gt; the organizers for this opportunity and all of you for coming.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m Laurent Perrinet from the Institut des Neurosciences de la Timone, a joint AMU / CNRS unit, and I&amp;rsquo;m a computational neuroscientist interested in large-scale models of vision.&lt;/p&gt;
&lt;p&gt;While this project has only just started, I would already like to mention a few ideas for the future. Indeed, one may wonder about the future applications of the neuromorphic chips that will be developed within the &amp;ldquo;Emergences&amp;rdquo; project. For this kind of technological development, we often think of technological applications such as autonomous cars or robotic vision. But there are also applications that aim at understanding how the brain, and cognition in general, works. And this requires a better knowledge of how cognition is contained in neural activity.&lt;/p&gt;
&lt;p&gt;If you wish to go further, these slides along with a number of references and useful links are available on my website.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="techniques-denregistrement-de-données-neurobiologiques"&gt;Techniques d&amp;rsquo;enregistrement de données neurobiologiques&lt;/h2&gt;
&lt;aside class="notes"&gt;
We will review different techniques for recording neurobiological data and how they have evolved over time. Then I will mention a few analysis methods, with concrete examples and the link to neuromorphic systems.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="enregistrement-extracellulaire"&gt;Enregistrement extracellulaire&lt;/h3&gt;
&lt;figure id="figure-hubel--wiesel-1962"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/scientists.jpg" alt="[Hubel &amp; Wiesel, 1962]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Hubel &amp;amp; Wiesel, 1962]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Even though they were not the first to record the electrical activity of neurons (the German physiologists Emil du Bois-Reymond and Hermann von Helmholtz did so in the middle of the 19th century), David Hubel and Torsten Wiesel marked their era. In 1962, they carried out groundbreaking experiments which revealed the basic mechanisms of visual perception and laid the foundations for understanding the functional organization of the visual cortex. This work earned Hubel and Wiesel the Nobel Prize in Physiology or Medicine in 1981.&lt;/p&gt;
&lt;p&gt;The main technique used by Hubel and Wiesel in their experiments was the extracellular recording microelectrode. They inserted fine electrodes into the primary visual cortex (also called striate cortex) of anesthetized cats and monkeys. These electrodes allowed them to record the electrical activity of individual neurons while visual stimuli were presented.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="aire-visuelle-primaire"&gt;Aire visuelle primaire&lt;/h3&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.readkong.com/static/06/b0/06b09f0235ae7fcf29438ce317c10e60/optogenetic-visual-cortical-prosthesis-9612386-7.jpg" alt="" loading="lazy" data-zoomable width="61%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The primary visual area is a brain region specialized in processing visual information. Located at the back of the occipital lobe, it plays a key role in visual perception by analyzing features such as the orientation, color and size of stimuli. Its topographic organization and the electrical activity of its neurons allow a coherent visual representation to be built.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="enregistrement-extracellulaire-1"&gt;Enregistrement extracellulaire&lt;/h3&gt;
&lt;video controls &gt;
&lt;source src="https://raw.githubusercontent.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/master/figures/ComplexDirSelCortCell250_title.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;[Hubel &amp;amp; Wiesel, 1962]&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Hubel and Wiesel used a variety of visual stimuli, such as lines, bars, light spots and moving patterns, which they presented to animals under controlled conditions. By recording the responses of visual neurons, they observed characteristic patterns of neuronal activity as a function of the visual properties of the stimuli.&lt;/p&gt;
&lt;p&gt;Their work revealed the existence of specific neurons, called simple and complex cells, which respond selectively to particular visual features such as the orientation, direction of motion and size of the stimuli. They also discovered that these neurons are organized hierarchically, with simple cells detecting elementary visual features and complex cells integrating this information to form more complex representations.&lt;/p&gt;
&lt;p&gt;but also: sharp electrodes, patch-clamp&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="multi-électrodes"&gt;Multi-électrodes&lt;/h3&gt;
&lt;figure id="figure-microelectrode-array-meashttpsenwikipediaorgwikimicroelectrode_array"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://medtech.citeline.com/-/media/editorial/medtech-insight/2021/12/mt2112_utah_array.jpg" alt="[[Microelectrode array (MEAs)](https://en.wikipedia.org/wiki/Microelectrode_array)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://en.wikipedia.org/wiki/Microelectrode_array" target="_blank" rel="noopener"&gt;Microelectrode array (MEAs)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;distributed population&lt;/p&gt;
&lt;p&gt;comb-like probes, Utah array = the data rate grows in proportion to the number of channels times the sampling frequency&amp;hellip; 48 megabits per second (100 channels × 30,000 samples/second × 16 bits).&lt;/p&gt;
&lt;p&gt;example: Ladret, cat = 100 GB
example: Ladret, macaque = a few terabytes&lt;/p&gt;
&lt;p&gt;from one area to several areas at the mesoscopic scale (mention brain size)&lt;/p&gt;
&lt;/aside&gt;
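&lt;p&gt;A quick back-of-the-envelope check of that raw data rate (a sketch using the figures quoted in the notes; the recording duration and sample depth are illustrative assumptions):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;# raw throughput of a multi-electrode recording, e.g. a 100-channel Utah array
n_channels = 100
sample_rate = 30_000        # samples per second and per channel
bits_per_sample = 16

bits_per_second = n_channels * sample_rate * bits_per_sample
print(bits_per_second / 1e6, "Mbit/s")                  # 48.0 Mbit/s
print(bits_per_second / 8 * 3600 / 1e9, "GB per hour")  # about 21.6 GB/h
&lt;/code&gt;&lt;/pre&gt;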
&lt;hr&gt;
&lt;h3 id="différentes-échelles"&gt;Différentes échelles&lt;/h3&gt;
&lt;figure id="figure-chemla-et-al-2017httpsdxdoiorg1011171nph43031215"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2024-03-27-emergences/scales.png" alt="[[Chemla *et al*, 2017](https://dx.doi.org/10.1117/1.NPh.4.3.031215)]" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://dx.doi.org/10.1117/1.NPh.4.3.031215" target="_blank" rel="noopener"&gt;Chemla &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;imaging: fMRI, EEG, MEG, MEEG, iEEG, &amp;hellip;&lt;/p&gt;
&lt;p&gt;big initiatives: BRAIN, HBP, Human Connectome Project, Allen Institute, Blue Brain Project, OpenWorm, OpenAI, OpenPhilanthropy, OpenCog, OpenMind&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="vers-des-données-massives"&gt;Vers des données massives&lt;/h3&gt;
&lt;figure id="figure-stevenson-and-kording-2011httpseuropepmcorgbackendptpmcrenderfcgiaccidpmc3410539blobtypepdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/talk/2024-03-27-emergences/featured.png" alt="[[Stevenson and Kording, 2011](https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC3410539&amp;blobtype=pdf)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC3410539&amp;amp;blobtype=pdf" target="_blank" rel="noopener"&gt;Stevenson and Kording, 2011&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Ian H Stevenson &amp;amp; Konrad P Kording
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="vers-des-données-massives-1"&gt;Vers des données massives&lt;/h3&gt;
&lt;figure id="figure-steinmetz-et-al-2017httpswwwuclacukneuropixels"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.ucl.ac.uk/neuropixels/sites/neuropixels/files/styles/medium_image/public/neuropixels_1_and_2.png" alt="[[Steinmetz *et al*, 2017](https://www.ucl.ac.uk/neuropixels/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://www.ucl.ac.uk/neuropixels/" target="_blank" rel="noopener"&gt;Steinmetz &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Neuropixels&lt;/p&gt;
&lt;p&gt;Compared to Neuropixels 1.0, the 2.0 probe has a smaller, lighter package, and is available in single- or four-shank versions, allowing even higher-density chronic recording in small animal models.&lt;/p&gt;
&lt;p&gt;The probe features 1280 low-impedance TiN recording sites densely tiled along one thin, 10 mm-long, straight shank, or 5120 electrodes divided over 4 shanks. The 384 parallel, configurable, low-noise recording channels integrated in the base enable simultaneous full band recording of hundreds of neurons.&lt;/p&gt;
&lt;p&gt;Priebe data: using GPUs&amp;hellip; but for how long?&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="techniques-danalyse-des-données-neurobiologiques"&gt;Techniques d&amp;rsquo;analyse des données neurobiologiques&lt;/h2&gt;
&lt;aside class="notes"&gt;
&amp;hellip;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="méthodes-statistiques"&gt;Méthodes statistiques&lt;/h3&gt;
&lt;figure id="figure-ladret-et-al-2023httpslaurentperrinetgithubiopublicationladret-23"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/ladret-23/featured.png" alt="[[Ladret *et al*, 2023](https://laurentperrinet.github.io/publication/ladret-23/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/ladret-23/" target="_blank" rel="noopener"&gt;Ladret &lt;em&gt;et al&lt;/em&gt;, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;a href="https://hugoladret.github.io/publications/ladret_et_al_variance_v1/" target="_blank" rel="noopener"&gt;https://hugoladret.github.io/publications/ladret_et_al_variance_v1/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;from action potentials: firing rate (Adrian); give the example of Ladret
often not sufficient on its own, this is biology
rhythms, functional connectivity
Churchland&amp;rsquo;s manifolds&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="méthodes-statistiques-1"&gt;Méthodes statistiques&lt;/h3&gt;
&lt;figure id="figure-ladret-et-al-2023httpslaurentperrinetgithubiopublicationladret-23"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://hugoladret.github.io/publications/imgs/ladret_et_al_variance_V1_2.png" alt="[[Ladret *et al*, 2023](https://laurentperrinet.github.io/publication/ladret-23/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/ladret-23/" target="_blank" rel="noopener"&gt;Ladret &lt;em&gt;et al&lt;/em&gt;, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
To give a little more detail, we ran this protocol in order to understand how visual neurons respond to the different textures found in natural images.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="méthodes-statistiques-2"&gt;Méthodes statistiques&lt;/h3&gt;
&lt;figure id="figure-ladret-et-al-2023httpslaurentperrinetgithubiopublicationladret-23"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://hugoladret.github.io/publications/imgs/ladret_et_al_variance_V1_4.png" alt="[[Ladret *et al*, 2023](https://laurentperrinet.github.io/publication/ladret-23/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/ladret-23/" target="_blank" rel="noopener"&gt;Ladret &lt;em&gt;et al&lt;/em&gt;, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
This first statistical analysis allowed us to characterize the responses of different types of neurons, and in particular to propose that some of them encode different levels of precision in the image, which is a novelty with respect to the literature.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="-et-au-delà"&gt;&amp;hellip; et au-delà!&lt;/h3&gt;
&lt;figure id="figure-churchland--cunningham-et-al-2012httpswwwthetransmitterorghow-to-teach-this-paperhow-to-teach-this-paper-neural-population-dynamics-during-reaching-by-churchland-cunningham-et-al-2012-3"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.thetransmitter.org/wp-content/uploads/2023/11/teach-a-paper.png" alt="[[Churchland &amp; Cunningham et al. (2012)](https://www.thetransmitter.org/how-to-teach-this-paper/how-to-teach-this-paper-neural-population-dynamics-during-reaching-by-churchland-cunningham-et-al-2012-3/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://www.thetransmitter.org/how-to-teach-this-paper/how-to-teach-this-paper-neural-population-dynamics-during-reaching-by-churchland-cunningham-et-al-2012-3/" target="_blank" rel="noopener"&gt;Churchland &amp;amp; Cunningham et al. (2012)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
In all these types of recordings with several neurons measured simultaneously, we observe a population response, and we therefore have to invent new techniques to analyze these data.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="-et-au-delà-le-décodage"&gt;&amp;hellip; et au-delà: le décodage&lt;/h3&gt;
&lt;figure id="figure-ladret-et-al-2023httpslaurentperrinetgithubiopublicationladret-23"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://hugoladret.github.io/publications/imgs/ladret_et_al_variance_V1_6.png" alt="[[Ladret *et al*, 2023](https://laurentperrinet.github.io/publication/ladret-23/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/ladret-23/" target="_blank" rel="noopener"&gt;Ladret &lt;em&gt;et al&lt;/em&gt;, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Another method consists in using a decoding procedure, which applies a machine-learning model to the whole dataset. Here, we used a simple logistic regression. A first foray into machine learning.
&lt;/aside&gt;
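&lt;p&gt;As an illustration of that decoding step, here is a minimal sketch in the same spirit (simulated spike counts and a scikit-learn logistic regression; it is not the analysis code of the paper):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# toy population responses: a trials-by-neurons matrix of spike counts
rng = np.random.default_rng(0)
n_trials, n_neurons, n_stim = 400, 50, 12
labels = rng.integers(0, n_stim, size=n_trials)     # stimulus shown on each trial
tuning = rng.standard_normal((n_stim, n_neurons))   # each neuron's preference
rates = np.clip(5.0 + tuning[labels], 0.1, None)    # mean rate per trial and neuron
counts = rng.poisson(rates)                         # observed spike counts

# decode the stimulus identity from the population response
decoder = LogisticRegression(max_iter=2000)
scores = cross_val_score(decoder, counts, labels, cv=5)
print("cross-validated decoding accuracy:", scores.mean())
&lt;/code&gt;&lt;/pre&gt;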
&lt;hr&gt;
&lt;h3 id="-et-au-delà-le-décodage-1"&gt;&amp;hellip; et au-delà: le décodage&lt;/h3&gt;
&lt;figure id="figure-ladret-et-al-2023httpslaurentperrinetgithubiopublicationladret-23"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://hugoladret.github.io/publications/imgs/ladret_et_al_variance_V1_7.png" alt="[[Ladret *et al*, 2023](https://laurentperrinet.github.io/publication/ladret-23/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/ladret-23/" target="_blank" rel="noopener"&gt;Ladret &lt;em&gt;et al&lt;/em&gt;, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The next question was: what exactly do these different neurons do? To figure this out, we used a method called neural decoding, which tries to guess what the neurons are “seeing” based on their responses.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="-et-au-delà-le-décodage-2"&gt;&amp;hellip; et au-delà: le décodage&lt;/h3&gt;
&lt;figure id="figure-ladret-et-al-2023httpslaurentperrinetgithubiopublicationladret-23"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://hugoladret.github.io/publications/imgs/ladret_et_al_variance_V1_8.png" alt="[[Ladret *et al*, 2023](https://laurentperrinet.github.io/publication/ladret-23/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/ladret-23/" target="_blank" rel="noopener"&gt;Ladret &lt;em&gt;et al&lt;/em&gt;, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
explainability of the coefficients
ICA, SVM, auto-encoders (Gallant)
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="brain-computer-interface-bci"&gt;Brain-Computer Interface (BCI)&lt;/h3&gt;
&lt;figure id="figure-interface-neuronale-directe-bcihttpsfrwikipediaorgwikiinterface_neuronale_directe"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/fe/InterfaceNeuronaleDirecte-fr.svg/2560px-InterfaceNeuronaleDirecte-fr.svg.png" alt="[[Interface neuronale directe (BCI)](https://fr.wikipedia.org/wiki/Interface_neuronale_directe)]" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://fr.wikipedia.org/wiki/Interface_neuronale_directe" target="_blank" rel="noopener"&gt;Interface neuronale directe (BCI)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;evoked potentials&lt;/p&gt;
&lt;p&gt;patterns / recently: detection of waves&lt;/p&gt;
&lt;p&gt;causal with respect to what the activity does (?)&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="perspectives-et-opportunités-du-neuromorphique"&gt;Perspectives et opportunités du neuromorphique&lt;/h2&gt;
&lt;aside class="notes"&gt;
&amp;hellip;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="exploitation-dun-timing-précis"&gt;Exploitation d&amp;rsquo;un timing précis&lt;/h3&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.sstatic.net/ixnrz.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="exploitation-dun-timing-précis-1"&gt;Exploitation d&amp;rsquo;un timing précis&lt;/h3&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="exploitation-dun-timing-précis-2"&gt;Exploitation d&amp;rsquo;un timing précis&lt;/h3&gt;
&lt;figure id="figure-kremkow-et-al-2016httpslaurentperrinetgithubiopublicationkremkow-16"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/kremkow-16/featured.png" alt="[[Kremkow *et al*, 2016](https://laurentperrinet.github.io/publication/kremkow-16/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/kremkow-16/" target="_blank" rel="noopener"&gt;Kremkow &lt;em&gt;et al&lt;/em&gt;, 2016&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="exploitation-dun-timing-précis-3"&gt;Exploitation d&amp;rsquo;un timing précis&lt;/h3&gt;
&lt;figure id="figure-diesmann-et-al-1999httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_3_diesmann_et_al_1999py"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Diesmann_et_al_1999.png" alt="[[Diesmann et al. 1999](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py" target="_blank" rel="noopener"&gt;Diesmann et al. 1999&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Mainen and Sejnowski
Diesmann
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="codage-par-latence"&gt;Codage par latence&lt;/h3&gt;
&lt;figure id="figure-haimerl-et-al-2019httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" alt="[[Haimerl et al, 2019](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Haimerl et al, 2019&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Thorpe
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="codage-par-latence-1"&gt;Codage par latence&lt;/h3&gt;
&lt;figure id="figure-thorpe-2001httpslaurentperrinetgithubio2022-01-12_neurocercle21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/scheme_thorpe.jpg" alt="[[Thorpe (2001)]](https://laurentperrinet.github.io/2022-01-12_NeuroCercle/#/2/1)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/#/2/1" target="_blank" rel="noopener"&gt;[Thorpe (2001)]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The visual system is very efficient at generating a decision: from the retinal image, through the different stages of the visual pathways, here for a macaque monkey, down to a reaction of the finger muscles in about 300 milliseconds.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the process of categorizing an object takes 10 layers&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="latences-et-rapidité"&gt;Latences et rapidité&lt;/h3&gt;
&lt;figure id="figure-visual-latencies-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency.jpg" alt="Visual latencies ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;the latencies are similar in the human brain, merely scaled up with brain size&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;as a consequence, it is thought that this efficiency is achieved by spikes, that is, brief all-or-none events which are passed, within the very large network that forms the brain, from one assembly of neurons to another.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="algorithmes-neuromorphiques"&gt;Algorithmes neuromorphiques&lt;/h2&gt;
&lt;aside class="notes"&gt;
&amp;hellip;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="always-on-classification-using-hots"&gt;Always-on classification using HOTS&lt;/h3&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/hots.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
always-on
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="always-on-classification-using-hots-1"&gt;Always-on classification using HOTS&lt;/h3&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/gesture_offline.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
always-on
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="always-on-classification-using-hots-2"&gt;Always-on classification using HOTS&lt;/h3&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/gesture_online.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
always-on
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="spiking-motifs-in-vision"&gt;Spiking motifs in vision&lt;/h3&gt;
&lt;figure id="figure-grimaldi-et-al-2023-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="[Grimaldi *et al*, 2023, [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023, &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Thorpe
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="spiking-motifs-in-vision-1"&gt;Spiking motifs in vision&lt;/h3&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
Thorpe
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="spiking-motifs-in-vision-2"&gt;Spiking motifs in vision&lt;/h3&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://raw.githubusercontent.com/laurentperrinet/figures/7f382a8074552de1a6a0c5728c60d48788b5a9f8/animated_neurons/conv_HDSNN.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Thorpe
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="spiking-motifs-in-vision-3"&gt;Spiking motifs in vision&lt;/h3&gt;
&lt;p&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Thorpe
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id="spiking-motifs-in-vision-4"&gt;Spiking motifs in vision&lt;/h3&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy_raw.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Thorpe
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="spiking-motifs-in-vision-5"&gt;Spiking motifs in vision&lt;/h3&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy_shortening.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Thorpe
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="spiking-motifs-in-vision-6"&gt;Spiking motifs in vision&lt;/h3&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Thorpe
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="spiking-motifs-pour-la-bio-hd-snn"&gt;Spiking motifs pour la bio (HD-SNN)&lt;/h3&gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_toy-a_k.svg" width="42%"&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_toy-b.svg" width="42%"&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_toy-c.svg" width="42%"&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_toy-a.svg" width="42%"&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
spiking motifs
&lt;/aside&gt;
&lt;hr&gt;
&lt;h3 id="spiking-motifs-pour-la-bio-hd-snn-1"&gt;Spiking motifs pour la bio (HD-SNN)&lt;/h3&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_N_SMs.svg" width="31%"&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_N_pre.svg" width="31%"&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_N_SM_time.svg" width="31%"&gt;
&lt;/span&gt;
&lt;p&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-23-icann/" target="_blank" rel="noopener"&gt;LP (2023)&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
This was a toy example; let&amp;rsquo;s now quantify the performance of this method at a realistic scale by measuring the accuracy of finding the right SM at the right time. For this, we compare our method to a classical approach based on cross-correlation.
First, by increasing the number of motifs, we show that the accuracy of our method (in blue) is very high and outperforms the cross-correlation method (red), in particular as the number of SMs increases. The same trend also holds when the number of presynaptic inputs increases from a low to a high dimension. Finally, the number of possible delays is a crucial parameter, and enough heterogeneous delays are necessary to reach good performance.
&lt;/aside&gt;
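&lt;p&gt;To make the cross-correlation baseline mentioned above concrete, here is a rough sketch on toy data (binary motif templates slid over a binary raster; the sizes and densities are arbitrary assumptions, and this is not the code used for the figures):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

rng = np.random.default_rng(1)
n_motifs, n_neurons, n_delays, T = 4, 20, 15, 300

# random binary spiking motifs (neuron x delay) and a noisy background raster
motifs = np.less(rng.random((n_motifs, n_neurons, n_delays)), 0.1).astype(float)
raster = np.less(rng.random((n_neurons, T)), 0.02).astype(float)

# embed one known motif at a known time
true_motif, t0 = 2, 100
raster[:, t0:t0 + n_delays] = np.maximum(raster[:, t0:t0 + n_delays], motifs[true_motif])

# cross-correlation detection: slide every template over the raster
scores = np.zeros((n_motifs, T - n_delays))
for m in range(n_motifs):
    for t in range(T - n_delays):
        scores[m, t] = np.sum(motifs[m] * raster[:, t:t + n_delays])

m_hat, t_hat = np.unravel_index(np.argmax(scores), scores.shape)
print("detected motif", m_hat, "at time", t_hat)   # ideally (2, 100)
&lt;/code&gt;&lt;/pre&gt;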
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="future-steps"&gt;Future steps&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;ul&gt;
&lt;li&gt;unsupervised&lt;/li&gt;
&lt;/ul&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;ul&gt;
&lt;li&gt;high-throughput&lt;/li&gt;
&lt;/ul&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;ul&gt;
&lt;li&gt;real-time&lt;/li&gt;
&lt;/ul&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;!--
---
### unsupervised
&lt;aside class="notes"&gt;
unsupervised / contrastive learning
&lt;/aside&gt;
---
### high-throughput
&lt;aside class="notes"&gt;
neuromorphic chips, spike sorting on the electrode
&lt;/aside&gt;
---
### real-time using neuromorphic hardware
&lt;figure id="figure-loihi-2"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://cdn.cnx-software.com/wp-content/uploads/2022/09/Intel-Loihi-2.jpg" alt="Loihi 2" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Loihi 2
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
energy (heat) +
speed +
anticipation (PP)
&lt;/aside&gt; --&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h3 id="analyser-de-larges-volumes-de-données-neurobiologiques-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2024-03-27-emergences/?transition=fade" target="_blank" rel="noopener"&gt;Analyser de larges volumes de données neurobiologiques&lt;/a&gt;&lt;/h3&gt;
&lt;h4 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-emergences-workshop-autrans-france-1"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2024-03-27-emergences" target="_blank" rel="noopener"&gt;[2024-03-27]&lt;/a&gt; &lt;a href="https://laurentperrinet.github.io/grant/emergences/" target="_blank" rel="noopener"&gt;Emergences workshop, Autrans, France&lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;h4 id="laurentperrinetuniv-amufr-1"&gt;&lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/h4&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;In conclusion, &amp;hellip;&lt;/p&gt;
&lt;p&gt;&amp;hellip; in cooperation with robotics&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2024-02-05-udem.md</title><link>https://laurentperrinet.github.io/slides/2024-02-05-udem/</link><pubDate>Mon, 05 Feb 2024 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2024-02-05-udem/</guid><description>&lt;section&gt;
&lt;h3 id="neuromorphic-models-of-vision"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2024-02-05-udem/?transition=fade" target="_blank" rel="noopener"&gt;Neuromorphic models of vision&lt;/a&gt;&lt;/h3&gt;
&lt;h4 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-seminar-at-udems-school-of-optometry-montréal"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-12-01-biocomp" target="_blank" rel="noopener"&gt;[2024-02-05]&lt;/a&gt; &lt;a href="https://opto.umontreal.ca/ecole/english/" target="_blank" rel="noopener"&gt;Seminar at UdeM’s School of Optometry, Montréal&lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;h4 id="laurentperrinetuniv-amufr"&gt;&lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/h4&gt;
&lt;aside class="notes"&gt;
&lt;h2 id="when-brains-meet-computing-machines"&gt;When brains meet computing machines&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Hello&lt;/em&gt;, can you hear me in the back? First of all, I&amp;rsquo;d like to &lt;em&gt;thank&lt;/em&gt; the organizers for this opportunity and all of you for coming.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m Laurent Perrinet from the Institut des Neurosciences de la Timone, a joint AMU / CNRS unit, and I&amp;rsquo;m a computational neuroscientist interested in large-scale models of vision. During this seminar for the &amp;ldquo;groupe de recherche de la vision de l&amp;rsquo;UdeM&amp;rdquo;, I&amp;rsquo;ll focus on neuromorphic models by introducing you to &lt;em&gt;event-driven cameras&lt;/em&gt;, a new technology in the field of imaging, and the impact of this technology on our understanding of vision. The &lt;em&gt;outline&lt;/em&gt; of the talk is as follows: first, I will explain the concept of an event-driven camera, especially in comparison to a traditional frame-based camera. Then we&amp;rsquo;ll explore some applications of these cameras using specific algorithms. Finally, we&amp;rsquo;ll look at how our understanding of neuroscience can improve these algorithms.&lt;/p&gt;
&lt;p&gt;Relax, these slides along with a number of references and useful links are available on my website.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="sensing-light"&gt;Sensing light&lt;/h1&gt;
&lt;aside class="notes"&gt;
The primary goal of &lt;em&gt;imaging technologies&lt;/em&gt; is to represent a visual signal, i.e. the intensity and color of light as it is distributed across the visual field, in order to create a realistic representation of a visual scene. Let&amp;rsquo;s look at an example.
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="http://lepassetempsderose.l.e.pic.centerblog.net/fddea7fb.gif"
&gt;
&lt;aside class="notes"&gt;
For example, this galloping horse makes us feel like we&amp;rsquo;re seeing this real scene right in front of us. This imaging technique, made possible by the chain of pre-processing from my computer to the projector, appears to move smoothly, but it&amp;rsquo;s actually an &lt;em&gt;illusion&lt;/em&gt; called apparent motion. This is what happens when still images are shown one after another, very quickly, making it appear as if the scene is moving all the time: Our brains interpret these separate images as a single, unified moving scene. This technique is the basis of motion pictures and animation, where frames are displayed quickly enough to create the &lt;em&gt;illusion&lt;/em&gt; of continuous motion. Lowering the frame rate reveals this illusion&amp;hellip;
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://upload.wikimedia.org/wikipedia/commons/0/07/The_Horse_in_Motion-anim.gif"
&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&amp;hellip; and this example demonstrates that since numerous years imaging techniques have also opened the door to new scientific discoveries. For example, in the late 19th century, scientists wondered if horses lifted all four hooves off the ground when they galloped. It was too fast for the human eye to see. Eadweard Muybridge solved this mystery using &lt;em&gt;chronophotography&lt;/em&gt;, an early form of photography that captures motion. He took a series of photographs of a horse running and showed that there are moments when all four hooves are in the air. This breakthrough helped us better understand animal movement and paved the way for modern cameras.&lt;/p&gt;
&lt;p&gt;This technique is inspired by the research of &lt;a href="https://en.wikipedia.org/wiki/Etienne-Jules_Marey" target="_blank" rel="noopener"&gt;Etienne-Jules &lt;em&gt;Marey&lt;/em&gt;&lt;/a&gt;, under the term &lt;em&gt;chronophotography&lt;/em&gt;, which is the use of a rifle-like apparatus to photograph a visual scene. This technique allowed Muybridge, in particular, to scientifically demonstrate the mechanism of a horse&amp;rsquo;s gallop. The movie theater became popular only afterwards.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://4.bp.blogspot.com/-AHprBxkfu5o/UJ-lqR7GsmI/AAAAAAAAHpo/VJzY7HMuXe0/s1600/The&amp;#43;Horse&amp;#43;in&amp;#43;Motion,&amp;#43;1878.%C2%A0Eadweard&amp;#43;Muybridge&amp;#43;%28b.&amp;#43;9&amp;#43;April,&amp;#43;1830%29The&amp;#43;first&amp;#43;movie&amp;#43;ever&amp;#43;made,&amp;#43;from&amp;#43;still&amp;#43;photographs..gif" target="_blank" rel="noopener"&gt;http://4.bp.blogspot.com/-AHprBxkfu5o/UJ-lqR7GsmI/AAAAAAAAHpo/VJzY7HMuXe0/s1600/The+Horse+in+Motion,+1878.%C2%A0Eadweard+Muybridge+(b.+9+April,+1830)The+first+movie+ever+made,+from+still+photographs..gif&lt;/a&gt;
&lt;a href="https://upload.wikimedia.org/wikipedia/commons/0/07/The_Horse_in_Motion-anim.gif" target="_blank" rel="noopener"&gt;https://upload.wikimedia.org/wikipedia/commons/0/07/The_Horse_in_Motion-anim.gif&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://hackaday.com/wp-content/uploads/2018/04/saccades.gif"
&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;The use of such dynamic &lt;em&gt;visualization&lt;/em&gt; is crucial in the scientific field, whether in biology or physics, as it allows us to quantify the characteristics of the experiment being conducted, and this is certainly one of the reasons for your presence and an important aspect of your daily work. In the laboratory, for example, we use it in particular to quantify &lt;em&gt;eye movements&lt;/em&gt; when a stimulus is presented to an observer.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://hackaday.com/wp-content/uploads/2018/04/saccades.gif?w=600&amp;amp;h=600" target="_blank" rel="noopener"&gt;https://hackaday.com/wp-content/uploads/2018/04/saccades.gif?w=600&amp;h=600&lt;/a&gt;
&lt;a href="http://38.media.tumblr.com/831aada3328557146e214efe1cb867a5/tumblr_mslrotKPS01snyrdto1_500.gif" target="_blank" rel="noopener"&gt;http://38.media.tumblr.com/831aada3328557146e214efe1cb867a5/tumblr_mslrotKPS01snyrdto1_500.gif&lt;/a&gt;
&lt;a href="https://www.filmsranked.com/wp-content/uploads/2020/05/two-fencers.gif%22" target="_blank" rel="noopener"&gt;https://www.filmsranked.com/wp-content/uploads/2020/05/two-fencers.gif"&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="representing-light"&gt;Representing light&lt;/h4&gt;
&lt;!--
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="http://1.bp.blogspot.com/-odG4Twu0Blc/UrN3ytufKnI/AAAAAAAACRM/dzJNcpV4JfY/s1600/Monty&amp;#43;Python%27s&amp;#43;1.gif" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
--&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/movie.gif" alt="" loading="lazy" data-zoomable width="66%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
To better understand the mechanism behind this technology, let&amp;rsquo;s take a sample video.
Here, I&amp;rsquo;ve taken a grayscale &lt;em&gt;video&lt;/em&gt; from an episode of the Monty Python&amp;rsquo;s Flying Circus TV series.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="representing-light-1"&gt;Representing light&lt;/h4&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/analog_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&amp;hellip; and we will focus on a &lt;em&gt;single pixel&lt;/em&gt; in the visual field.
In this way, we can represent the evolution of the &lt;em&gt;log intensity&lt;/em&gt; of the light signal as a function of time.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="frame-based-camera-temporal-discretization"&gt;Frame-Based Camera: Temporal discretization&lt;/h4&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/frame-based_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
From this representation, expressed in continuous time, we can &lt;em&gt;discretize&lt;/em&gt; time and measure the log intensity at regular time intervals. The interval between two successive images gives the &lt;em&gt;temporal resolution&lt;/em&gt;, and its inverse gives the number of images per second. This is the representation classically used in chronophotography, but also in all conventional video stream &lt;em&gt;acquisition and viewing&lt;/em&gt; technologies.
This technology is highly efficient for a wide range of signals. However, it does have certain &lt;em&gt;limitations&lt;/em&gt;.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="frame-based-camera-temporal-aliasing"&gt;Frame-Based Camera: Temporal Aliasing&lt;/h4&gt;
&lt;figure id="figure-gregor-lenz-2020httpslenzgregorcompostsevent-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://lenzgregor.com/posts/event-cameras/post-rethinking/frames.gif" alt="[[Gregor Lenz, 2020](https://lenzgregor.com/posts/event-cameras/)]" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://lenzgregor.com/posts/event-cameras/" target="_blank" rel="noopener"&gt;Gregor Lenz, 2020&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
To illustrate a common limitation, let&amp;rsquo;s take the &lt;em&gt;example&lt;/em&gt; of three colored cubes rotating around a circle on a frontal axis. Due to the camera’s temporal resolution and the duration the shutter remains open, the captured images exhibit blur. This makes it challenging to precisely measure the cubes’ movement. As the cubes’ rotation speed increases, we might notice an effect called temporal &lt;em&gt;aliasing&lt;/em&gt;, where the movement appears distorted due to the camera’s limitations.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="frame-based-camera-wagon-wheel-illusion"&gt;Frame-Based Camera: Wagon-Wheel Illusion&lt;/h4&gt;
&lt;figure id="figure-sam-brinson-2020httpswwwsambrinsoncomnature-of-perception"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://vignette.wikia.nocookie.net/revengeristsconsortium/images/2/25/Whee.gif/revision/latest/scale-to-width-down/340?cb=20141209071330" alt="[[Sam Brinson, 2020](https://www.sambrinson.com/nature-of-perception/)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://www.sambrinson.com/nature-of-perception/" target="_blank" rel="noopener"&gt;Sam Brinson, 2020&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
This phenomenon is particularly striking when we look at a spinning wheel moving at high speed. Sometimes, the wheel spins so fast that in two consecutive images it appears to rotate backwards. This optical illusion is known as the wagon-wheel illusion. It’s particularly noticeable in car wheels, where the central hub may seem stationary while the wheel itself seems to turn &lt;em&gt;counter&lt;/em&gt; to its actual direction on the road. Again, this wagon-wheel effect is due to the limitations of a standard camera.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="event-based-cameras"&gt;Event-Based Cameras&lt;/h1&gt;
&lt;aside class="notes"&gt;
Transitioning from conventional frame-based cameras, we now focus on the &lt;em&gt;event-based camera&lt;/em&gt;, a highly promising bio-inspired visual sensor.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="event-based-cameras-1"&gt;Event-Based Cameras&lt;/h4&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;An event-based camera is equipped with a sensor that converts light into an electrical current, similar to conventional CMOS sensors. However, it differs from standard frame-based cameras in that it is inspired by the human retina. There are two main differences from a frame-based camera (middle graph) that lead to an event-based representation (right graph):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;First, each pixel of an event-based camera is &lt;em&gt;independent&lt;/em&gt;, operating without a synchronized global clock.&lt;/li&gt;
&lt;li&gt;Second, each pixel detects changes in &lt;em&gt;logarithmic light intensity&lt;/em&gt; and generates a binary event only if the change exceeds a &lt;em&gt;threshold&lt;/em&gt;. If the change is an increment - that is, the log intensity has increased - the event has positive polarity; if it&amp;rsquo;s a decrement, the event has negative polarity.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In summary, an event is generated asynchronously when a pixel-level change in brightness is detected. This results in superior temporal resolution and reduced susceptibility to motion blur, making event cameras ideal for capturing fast-moving scenes.&lt;/p&gt;
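&lt;p&gt;As a rough illustration of this principle (a toy sketch, not the actual sensor circuit), the per-pixel logic can be emulated in a few lines of Python; the threshold value and the test signal below are arbitrary choices made for the example.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def events_from_log_intensity(log_I, threshold=0.2):
    """Toy DVS pixel: emit (sample index, polarity) events whenever the log
    intensity drifts by more than `threshold` from its last reference value."""
    events, reference = [], log_I[0]
    for k, value in enumerate(log_I):
        delta = value - reference
        while abs(delta) &gt;= threshold:
            polarity = 1 if delta &gt; 0 else -1
            events.append((k, polarity))
            reference += polarity * threshold   # move the reference by one threshold step
            delta = value - reference
    return events

# arbitrary test signal: a slow rise followed by a decay
t = np.linspace(0, 1, 1000)
log_I = np.where(t &lt; 0.5, 2 * t, 1.0 - (t - 0.5))
print(events_from_log_intensity(log_I)[:5])
&lt;/code&gt;&lt;/pre&gt;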
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="event-based-cameras-dvs-gesture"&gt;Event-Based Cameras: DVS gesture&lt;/h4&gt;
&lt;p&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_arm-roll.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_hand-clap.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_air-guitar.webp" width="33%"/&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
Let&amp;rsquo;s take some examples from a classic dataset, DVS Gesture. These movements are, for example, clapping hands or playing air guitar. Note that the stream of events is caused by changes in the visual scene, so that static parts produce no events. Let&amp;rsquo;s now explain how discrete events are generated in response to the luminous input that continuously evolves over time.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="event-based-cameras-2"&gt;Event-Based Cameras&lt;/h4&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_0.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Our signal is analog. It consists of the evolution of the log-intensity (y axis) of a single pixel through time (x axis).
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="event-based-cameras-3"&gt;Event-Based Cameras&lt;/h4&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_1.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&amp;hellip; As we follow this trajectory, we can observe that it crosses a threshold. It is at this precise moment that the pixel generates an event. In this case, the event is of positive polarity, since it corresponds to an increase.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="event-based-cameras-4"&gt;Event-Based Cameras&lt;/h4&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_2.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The signal then continues its time course and crosses a threshold again, resulting in the production of a new event with positive polarity.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="event-based-cameras-5"&gt;Event-Based Cameras&lt;/h4&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_5.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The log intensity continues to increase, leading to further increments, or in other words, events of positive polarity.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="event-based-cameras-6"&gt;Event-Based Cameras&lt;/h4&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_10.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Now the signal decreases, resulting in events with negative polarity instead of positive polarity.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="event-based-cameras-7"&gt;Event-Based Cameras&lt;/h4&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_20.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Continuing this process, this simple mechanism generates a &lt;em&gt;stream&lt;/em&gt; of events for each pixel, &amp;hellip;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="event-based-cameras-8"&gt;Event-Based Cameras&lt;/h4&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_-1.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&amp;hellip; comprising a &lt;em&gt;list&lt;/em&gt; of occurrence times and their respective polarities.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="event-based-cameras-9"&gt;Event-Based Cameras&lt;/h4&gt;
&lt;!--
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
--&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Let&amp;rsquo;s now show it applied to the whole analog signal, showing the events below the signal.
It&amp;rsquo;s worth noting that this is particularly &lt;em&gt;sparse&lt;/em&gt; compared to frame-by-frame representations: in particular, a signal with very few changes can be represented by just a few binary events. This is a very useful feature, not only because it saves &lt;em&gt;bandwidth&lt;/em&gt;, but also because it allows us to concentrate the &lt;em&gt;computations&lt;/em&gt; on the few events that represent the image. It&amp;rsquo;s also a fundamental feature of neuron function in the brain. Indeed, neurons communicate sparsely with action potentials, which can be thought of as binary events.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="event-based-cameras-10"&gt;Event-Based Cameras&lt;/h4&gt;
&lt;p&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_arm-roll.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_hand-clap.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_air-guitar.webp" width="33%"/&gt;&lt;/p&gt;
&lt;!--
&lt;figure id="figure-gregor-lenz-2020httpslenzgregorcompostsevent-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://lenzgregor.com/posts/event-cameras/post-rethinking/events.gif" alt="[[Gregor Lenz, 2020](https://lenzgregor.com/posts/event-cameras/)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://lenzgregor.com/posts/event-cameras/" target="_blank" rel="noopener"&gt;Gregor Lenz, 2020&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Ultimately, we get a list of events for each pixel that can be &lt;em&gt;merged&lt;/em&gt; to represent the entire image. This list of events includes pixel addresses, times of occurrence, and polarities. Note that since events are generated over time, they are naturally sorted by their time of occurrence. These events are then transmitted in &lt;em&gt;real time&lt;/em&gt; to the output bus, often via a USB3 connection.
It&amp;rsquo;s interesting to draw a parallel between this process and the optic nerve that connects our retina to the brain. In fact, the output of the retina consists of a million ganglion cells that emit action potentials, which are the only source of information transmitted by the &lt;em&gt;optic nerve&lt;/em&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.researchgate.net/profile/Guido-Croon/publication/313221316/figure/fig2/AS:668997448134663@1536512829861/Picture-of-the-event-based-camera-employed-in-this-work-the-DVS_W640.jpg" target="_blank" rel="noopener"&gt;https://www.researchgate.net/profile/Guido-Croon/publication/313221316/figure/fig2/AS:668997448134663@1536512829861/Picture-of-the-event-based-camera-employed-in-this-work-the-DVS_W640.jpg&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="event-based-cameras-11"&gt;Event-Based Cameras&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Sensor&lt;/th&gt;
&lt;th&gt;Range&lt;/th&gt;
&lt;th&gt;Framerate&lt;/th&gt;
&lt;th&gt;Resolution&lt;/th&gt;
&lt;th&gt;Power&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Human eye&lt;/td&gt;
&lt;td&gt;60 (?) dB&lt;/td&gt;
&lt;td&gt;300 (?) fps&lt;/td&gt;
&lt;td&gt;100 (?) Mpx&lt;/td&gt;
&lt;td&gt;10 mW&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DSLR&lt;/td&gt;
&lt;td&gt;44.6 dB&lt;/td&gt;
&lt;td&gt;120 fps&lt;/td&gt;
&lt;td&gt;2&amp;ndash;20 Mpx&lt;/td&gt;
&lt;td&gt;30 W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ultra-high speed&lt;/td&gt;
&lt;td&gt;64 dB&lt;/td&gt;
&lt;td&gt;10^4 fps&lt;/td&gt;
&lt;td&gt;0.3&amp;ndash;4 Mpx&lt;/td&gt;
&lt;td&gt;300 W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Event-based&lt;/td&gt;
&lt;td&gt;120 dB&lt;/td&gt;
&lt;td&gt;10^6 fps&lt;/td&gt;
&lt;td&gt;0.1&amp;ndash;2 Mpx&lt;/td&gt;
&lt;td&gt;30 mW&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Event-driven cameras boast several remarkable properties.
Firstly, their &lt;em&gt;temporal precision&lt;/em&gt; is in the microsecond range, allowing for a theoretical frame rate of up to a million images per second. In contrast, a conventional camera typically captures around a hundred images per second, while a high-speed camera may reach 10,000 images per second. Estimating the sampling frequency of human perception is challenging; although 25 frames per second usually suffice for movies, the human eye can discern temporal details at rates between 300 and 1,000 frames per second.
It’s also noteworthy that the &lt;em&gt;spatial resolution&lt;/em&gt; of event cameras is generally modest, often in the megapixel range. This is not due to technical constraints but rather reflects the cameras’ common technological applications.
Compared with conventional cameras, which consume several watts, event cameras consume very little electrical &lt;em&gt;energy&lt;/em&gt;, on the order of 10 milliwatts, a consumption equivalent to that of the human eye.
Another key feature is their ability to detect a very wide &lt;em&gt;range&lt;/em&gt; of luminosity, reaching 120 dB, which is about a million times greater than conventional cameras and a thousand times greater than the human eye.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Event_camera#Functional_description" target="_blank" rel="noopener"&gt;https://en.wikipedia.org/wiki/Event_camera#Functional_description&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;more in &lt;a href="https://arxiv.org/pdf/1904.08405.pdf" target="_blank" rel="noopener"&gt;https://arxiv.org/pdf/1904.08405.pdf&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="event-based-cameras-12"&gt;Event-Based Cameras&lt;/h4&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
But why is detecting a very wide range of luminosity useful? The ability to &lt;em&gt;adapt&lt;/em&gt; to changing light conditions can be illustrated by revisiting our analog signal and its event representation. Consider, for example, an autonomous car driving in daylight and then entering and exiting a &lt;em&gt;tunnel&lt;/em&gt;. This scenario involves changes in brightness by a factor of several thousand.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="event-based-cameras-13"&gt;Event-Based Cameras&lt;/h4&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_low.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Here, the signal is divided by a factor of 8 in the middle section. A frame-based camera would report this drop directly in its pixel values. In an event-based camera, the step is represented by a &lt;em&gt;sharp decrement&lt;/em&gt; in log intensity and clearly indicated by events of negative polarity; but since the camera works on log intensity, dividing the light signal merely shifts the curve, so that its time course, and therefore the events it generates, remains the &lt;em&gt;same&lt;/em&gt;, as the short check below illustrates. Event-driven cameras are therefore particularly well suited to &lt;em&gt;dynamic signals&lt;/em&gt;, where the lighting context can change drastically.
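&lt;p&gt;A tiny numerical check of this argument (a sketch using an arbitrary intensity trace): dividing the intensity by 8 only shifts the log intensity by a constant, so the changes that trigger events are unchanged.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

I = 1.0 + 0.5 * np.sin(np.linspace(0, 20, 2000))   # arbitrary intensity trace
d_log_full = np.diff(np.log(I))                     # per-step change in log intensity
d_log_dim = np.diff(np.log(I / 8.0))                # same scene with 8x less light
print(np.allclose(d_log_full, d_log_dim))           # True: the event stream is preserved
&lt;/code&gt;&lt;/pre&gt;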
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="event-based-computer-vision"&gt;Event-Based Computer vision&lt;/h1&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;These cameras therefore look very promising for future applications, particularly for embedded applications, but also for applications linked to scientific experiments. However, we can see that the image &lt;em&gt;representation&lt;/em&gt; is completely different, that is, we can no longer consider static images that follow one another at a regular rate, and for which we could have applied the algorithms that have been developed for decades in the field of &lt;em&gt;computer vision&lt;/em&gt;. We end up with a signal that corresponds to events that are transmitted as a stream from the camera. And we have to reinvent all computer vision algorithms to make them &lt;em&gt;event-driven&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Note also that processing becomes &lt;em&gt;active&lt;/em&gt;, driven by the signal itself, rather than being paced by a fixed acquisition clock.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="always-on-object-recognition-dvs-gesture"&gt;Always-on Object Recognition: DVS gesture&lt;/h4&gt;
&lt;p&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_arm-roll.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_hand-clap.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_air-guitar.webp" width="33%"/&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
We considered the DVS gesture classification task, involving the classification of 10 different types of human gestures. These movements are, for example, clapping hands or playing air guitar. Note that the stream of events is caused by changes in the visual scene.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="always-on-object-recognition"&gt;Always-on Object Recognition&lt;/h4&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/hots.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
So how can we process and learn from data coming from an event-based camera?
My team, including PhD student Antoine Grimaldi, has enhanced an existing algorithm known as &lt;em&gt;HOTS&lt;/em&gt;. This algorithm employs a traditional convolutional and hierarchical structure to process information. It begins with the camera’s event data, which are processed by three stacked layers, the last layer giving a high-level representation suitable for tasks like digit recognition, for example identifying the number eight. A key aspect of HOTS is its conversion of event data into multiplexed, parallel channels that mirror different temporal sequences of events, termed &lt;em&gt;temporal surfaces&lt;/em&gt;, which provide a representation of recent activity. Each layer represents these temporal surfaces individually. Notably, the algorithm’s learning process is &lt;em&gt;unsupervised&lt;/em&gt; at every layer, marking a significant departure from typical deep learning methods that rely on back-propagating classification errors, which is biologically implausible. Building on HOTS, we have improved it by incorporating neurobiological insights, particularly the principle of &lt;em&gt;homeostasis&lt;/em&gt;, to better balance the various parallel communication pathways.
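&lt;p&gt;To give a flavour of the &lt;em&gt;temporal surface&lt;/em&gt; idea, here is a minimal sketch (not the exact implementation of the paper, and with arbitrary parameters): each event is described by an exponentially decayed map of the most recent event times in its spatial neighbourhood; in the full model, a separate map would be kept per polarity and the resulting descriptors matched against learned prototypes.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def time_surface(last_times, event, tau=50e-3, radius=2):
    """Local descriptor of an event: exp(-(t - t_last) / tau) on a (2R+1)^2 patch."""
    x, y, t = event
    patch = last_times[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return np.exp(-(t - patch) / tau)

H, W = 128, 128
last_times = np.full((H, W), -np.inf)    # time of the last event seen at each pixel
for (x, y, t) in [(64, 64, 0.010), (65, 64, 0.012), (64, 65, 0.013)]:
    last_times[y, x] = t
    descriptor = time_surface(last_times, (x, y, t))
    # `descriptor` would then be compared with the layer prototypes (clustering step)
&lt;/code&gt;&lt;/pre&gt;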
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="always-on-object-gesture-recognition"&gt;Always-on Object Gesture Recognition&lt;/h4&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/gesture_offline.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;To demonstrate our algorithm’s effectiveness, we tested it on the standard dataset presented before for classifying &lt;em&gt;10 distinct human gestures&lt;/em&gt;, such as clapping, waving, or drumming. With random guessing at about 8.3%, the original HOTS algorithm achieved 70% accuracy after processing all events. However, by adding &lt;em&gt;homeostasis&lt;/em&gt;, an important concept from neuroscience, we enhanced the algorithm’s performance to 82%. Homeostasis is used to balance the firing rates across neurons in a neural network. It ensures that all neurons contribute equally over time, avoiding dominance by a few neurons. This underscores the value of incorporating neuroscientific principles into machine learning.&lt;/p&gt;
&lt;p&gt;Furthermore, we leveraged a key trait of biological systems: the ability to process information continuously, in real time. Traditional algorithms wait to classify until all events are processed. We innovated by enabling our algorithm to classify on-the-fly, in real-time, with each incoming event. This means that as events occur, they’re instantly processed through the layers, reaching the classification layer without delay.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="always-on-object-gesture-recognition-1"&gt;Always-on Object Gesture Recognition&lt;/h4&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/gesture_online.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
What&amp;rsquo;s more interesting is that we were also able to show the &lt;em&gt;evolution&lt;/em&gt; of our algorithm’s average performance relative to the dataset and the number of processed events. The blue curve reveals that with fewer than 10 events, performance hovers at chance levels. However, as more events are processed, we observe a steady improvement. Remarkably, with 10,000 events, &lt;em&gt;performance&lt;/em&gt; matches that of the original algorithm and further excels with an additional tenfold increase in events. A major advantage of this algorithm is that it can be asked to classify the nature of what it sees in real-time, at any point during the event stream —not just after the entire signal is processed. Online processing is essential in biology. For example, imagine you&amp;rsquo;re on the savannah and a &lt;em&gt;lion&lt;/em&gt; jumps out at you. You won&amp;rsquo;t have the time to wait for the video sequence to finish processing before making the right decision, which is to flee.
We’ve also refined our algorithm to select classification events based on precision calculations for each event. By adding a precision &lt;em&gt;threshold&lt;/em&gt;, we achieve high performance with merely a hundred events. This reflects a biological network trait where decisions aren’t made incrementally but rather emerge abruptly here after 200 events — and then continue to improve and stabilize.
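&lt;p&gt;The on-the-fly decision rule can be summarised with a hypothetical sketch: per-class evidence is accumulated event by event, and a label is reported as soon as the best class is confident enough. The log-likelihood values and the 0.95 threshold below are made up for the example and do not correspond to the trained model.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def online_decision(event_log_likelihoods, threshold=0.95):
    """Accumulate per-class log-evidence event by event; stop as soon as the
    softmax posterior of the best class exceeds `threshold`."""
    log_evidence = np.zeros(event_log_likelihoods.shape[1])
    for k, ll in enumerate(event_log_likelihoods, start=1):
        log_evidence += ll
        posterior = np.exp(log_evidence - log_evidence.max())
        posterior /= posterior.sum()
        if posterior.max() &gt;= threshold:
            return int(posterior.argmax()), k   # label and number of events used
    return int(log_evidence.argmax()), k

# made-up example: 3 classes, 1000 events slightly favouring class 2
rng = np.random.default_rng(0)
ll = rng.normal(0.0, 1.0, size=(1000, 3))
ll[:, 2] += 0.05
print(online_decision(ll))
&lt;/code&gt;&lt;/pre&gt;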
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks"&gt;Spiking Neural Networks&lt;/h1&gt;
&lt;aside class="notes"&gt;
I have therefore illustrated the use of &lt;em&gt;event-driven&lt;/em&gt; cameras on a particular algorithm. The nice feature of this algorithm is that it processes the stream of events from the camera on an event-by-event basis rather than having to wait for the whole video sequence to finish. Each event has the potential to initiate a series of processes across various layers, allowing for the continuous update of classification values. This type of operation is characteristic of the way neurons work in the brain, that is, using an event-based representation of information processing. This is what we call &lt;em&gt;spiking neural networks&lt;/em&gt;.
&lt;/aside&gt;
&lt;hr&gt;
&lt;figure id="figure--gregor-lenz-tonic-manualhttpstonicreadthedocsioenlatest"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://tonic.readthedocs.io/en/latest/_images/neuron-models.png" alt="© Gregor Lenz, [[Tonic manual](https://tonic.readthedocs.io/en/latest/)]" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
© Gregor Lenz, [&lt;a href="https://tonic.readthedocs.io/en/latest/" target="_blank" rel="noopener"&gt;Tonic manual&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Traditional neural networks in deep learning typically rely on an analog representation. This is illustrated in this figure, where various analog inputs are integrated and then processed through a non-linear function to output an analog activation value. This basic &lt;em&gt;perceptron&lt;/em&gt; principle is at the foundation of all existing neural networks, including convolutional networks that excel in image classification. While effective for static images, this method can be resource-intensive for video processing. An alternative is the use of &lt;em&gt;spiking neurons&lt;/em&gt;. Unlike their analog counterparts, spiking neurons process discrete events, which are integrated in the membrane potential. When the membrane potential crosses a threshold, the neuron outputs an action potential, which can itself be seen as an event.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="spiking-neural-networks-lif-neuron"&gt;Spiking Neural Networks: LIF Neuron&lt;/h4&gt;
&lt;figure id="figure-grimaldi-et-al-2023-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="[Grimaldi *et al*, 2023, [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023, &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This is illustrated in this &lt;em&gt;animation&lt;/em&gt;, which shows how we can transform a list of input events by giving them different weights, and then &lt;em&gt;integrate&lt;/em&gt; them into the cell&amp;rsquo;s membrane potential. When the membrane potential crosses the spiking threshold, the neuron outputs a spike.&lt;/p&gt;
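&lt;p&gt;For reference, here is a minimal leaky integrate-and-fire sketch of what the animation shows; the time constant, threshold and input spikes are arbitrary values chosen for the example.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def lif(spike_times, weights, tau=20e-3, theta=1.0, dt=1e-3, T=0.2):
    """Weighted input spikes charge the membrane potential, which leaks with
    time constant tau and emits an output spike when it crosses theta."""
    v, out = 0.0, []
    for step in range(int(T / dt)):
        t = step * dt
        v *= np.exp(-dt / tau)                               # leak
        v += sum(w for ts, w in zip(spike_times, weights)
                 if abs(ts - t) &lt; dt / 2)                    # integrate incoming spikes
        if v &gt;= theta:                                       # threshold crossing
            out.append(t)
            v = 0.0                                          # reset
    return out

print(lif(spike_times=[0.010, 0.012, 0.015], weights=[0.5, 0.4, 0.4]))
&lt;/code&gt;&lt;/pre&gt;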
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="spiking-neural-networks-neuromorphic-hardware"&gt;Spiking Neural Networks: neuromorphic hardware&lt;/h4&gt;
&lt;figure id="figure-loihi-2"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://cdn.cnx-software.com/wp-content/uploads/2022/09/Intel-Loihi-2.jpg" alt="Loihi 2" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Loihi 2
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;The introduction of spiking neural networks marks a &lt;strong&gt;paradigm shift&lt;/strong&gt; in computation, in the same way that event-driven cameras have brought a paradigm shift in image representation. These spiking neural networks have led to the creation of innovative algorithms and the development of neuromorphic chips like Intel’s Loihi 2. This chip departs from traditional computing by utilizing a massively parallel array of event-driven processing units. As with event-driven cameras, this has the dual advantage of being very fast and consuming very little &lt;strong&gt;energy&lt;/strong&gt;. The field continues to advance, with new &lt;strong&gt;neuromorphic chips&lt;/strong&gt; being developed that could potentially replace standard CPUs and GPUs.&lt;/p&gt;
&lt;figure id="figure-propheseehttpsdocspropheseeaistableconceptshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://d1fmx1rbmqrxrr.cloudfront.net/zdnet/optim/i/edit/ne/2019/Pierre%20temp/Intel%20Loihi__w630.jpg" alt="[Prophesee](https://docs.prophesee.ai/stable/concepts.html)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://docs.prophesee.ai/stable/concepts.html" target="_blank" rel="noopener"&gt;Prophesee&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;p&gt;Loihi: &lt;a href="https://d1fmx1rbmqrxrr.cloudfront.net/zdnet/optim/i/edit/ne/2019/Pierre%20temp/Intel%20Loihi__w630.jpg" target="_blank" rel="noopener"&gt;https://d1fmx1rbmqrxrr.cloudfront.net/zdnet/optim/i/edit/ne/2019/Pierre%20temp/Intel%20Loihi__w630.jpg&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://cdn.cnx-software.com/wp-content/uploads/2022/09/Intel-Loihi-2.jpg?lossy=0&amp;amp;strip=none&amp;amp;ssl=1" target="_blank" rel="noopener"&gt;https://cdn.cnx-software.com/wp-content/uploads/2022/09/Intel-Loihi-2.jpg?lossy=0&amp;strip=none&amp;ssl=1&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="spiking-neural-networks-in-neurobiology"&gt;Spiking Neural Networks in neurobiology&lt;/h4&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.sstatic.net/ixnrz.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
Spiking neural networks show great potential for processing data from event-driven cameras. However, &lt;em&gt;neurophysiology&lt;/em&gt; studies reveal some unexpected behaviors, very different from the classical perceptron. I will highlight these differences with three examples. The first example is a 1995 study by Mainen and Sejnowski, which examined a neuron’s reaction to repeated stimulations.
&lt;em&gt;Panel A&lt;/em&gt; at the top presents the neuron’s response to multiple stimulations with a 200 picoampere &lt;em&gt;current step&lt;/em&gt;. The membrane potential varied across trials, indicating an unpredictable response. Initially, the spikes were synchronized at the onset of stimulation, but coherence diminished over time, leading to no alignment after approximately 750 milliseconds.
In contrast, Panel B at the bottom shows the neuron’s response to stimulation with &lt;em&gt;noise&lt;/em&gt;. Here, the neuron exhibited highly consistent responses across trials, with membrane potential traces nearly identical. This precision was achieved using &lt;em&gt;frozen&lt;/em&gt; noise, a repeated, unchanging stimulus. The study highlights that neurons respond less reliably to constant analog values, such as square pulses, and are more selective to dynamic signals, responding with remarkable precision in the temporal domain.
&lt;/aside&gt;
&lt;!--
---
#### Spiking Neural Networks in neurobiology
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproduucibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt; --&gt;
&lt;hr&gt;
&lt;h4 id="spiking-neural-networks-in-neurobiology-1"&gt;Spiking Neural Networks in neurobiology&lt;/h4&gt;
&lt;figure id="figure-diesmann-et-al-1999httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_3_diesmann_et_al_1999py"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Diesmann_et_al_1999.png" alt="[[Diesmann et al. 1999](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py" target="_blank" rel="noopener"&gt;Diesmann et al. 1999&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this second example, I show a simulation reproducing the 1999 paper by Diesmann and colleagues. This &lt;em&gt;theoretical model&lt;/em&gt; considers ten interconnected groups, each comprising 100 neurons. Each group is connected to the next one. A key finding is that information transfer across groups depends on the &lt;strong&gt;temporal concentration&lt;/strong&gt; of spikes. Initially, information is too scattered within the first group, leading to a dilution effect in subsequent groups. However, once a threshold is reached, a cluster of synchronous spikes ensures efficient propagation through the network. This non-linear dynamic is characteristic of spiking neural networks, adding a layer of richness, but also a certain complexity.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="spiking-neural-networks-in-neurobiology-2"&gt;Spiking Neural Networks in neurobiology&lt;/h4&gt;
&lt;figure id="figure-haimerl-et-al-2019httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" alt="[[Haimerl et al, 2019](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Haimerl et al, 2019&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
A third example shows an experiment conducted by Rosa Cossart&amp;rsquo;s group at INMED and recently published by Haimerl and colleagues. They used &lt;em&gt;calcium fluorescence&lt;/em&gt; imaging to track neuronal activity in mice; at first glance, the neurons appear to be activated in a random sequence. However, arranging the neurons in &lt;em&gt;temporal order of activation&lt;/em&gt; reveals a repeatable, sequential activation of these neurons, a mechanism which resembles the model mentioned earlier. These patterns closely align with the mouse’s motor behavior, as depicted in the accompanying graph. Surprisingly, these activity sequences remained consistent even when recorded on the &lt;em&gt;next day&lt;/em&gt;, underscoring the importance of temporal dynamics in neural computation.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks-spiking-motifs"&gt;Spiking Neural Networks: Spiking motifs&lt;/h1&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
These observations have led us to &lt;em&gt;review&lt;/em&gt; neurobiological evidence of neurons encoding information based on the relative timing of spikes. Intriguingly, the conduction &lt;em&gt;delays&lt;/em&gt; observed in spike transmission are not merely obstacles. Instead, they could be used to enhance information representation and processing through &lt;em&gt;spiking motifs&lt;/em&gt;. This perspective challenges traditional views and opens up new possibilities for understanding information representation, processing and learning.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="spiking-neural-networks-spiking-motifs-1"&gt;Spiking Neural Networks: Spiking motifs&lt;/h4&gt;
&lt;figure id="figure-grimaldi-et-al-2023-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="[Grimaldi *et al*, 2023, [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023, &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
Consider an ultra-simplified neural network with three presynaptic neurons and two output neurons, connected by &lt;em&gt;heterogeneous&lt;/em&gt; delays. With synchronous inputs, the contributions reach the output neurons at different times and fail to reach the threshold for an output spike. However, if the delays are such that the action potentials arrive simultaneously, the combined input can trigger an output spike at that &lt;em&gt;same instant&lt;/em&gt;, as indicated by the red bar.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="spiking-neural-networks-spiking-motifs-2"&gt;Spiking Neural Networks: Spiking motifs&lt;/h4&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
To better grasp this mechanism, let’s revisit the animation of a spiking neuron. Without delays, action potentials reach the neuron’s cell body immediately, where they’re integrated to potentially trigger a spike.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="spiking-neural-networks-spiking-motifs-3"&gt;Spiking Neural Networks: Spiking motifs&lt;/h4&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/HSD.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
Now using &lt;em&gt;heterogeneous&lt;/em&gt; delays, the timing of spike arrival at the cell body varies. Introducing a specific &lt;em&gt;spiking motif&lt;/em&gt;, marked by green action potentials, allows these spikes to converge simultaneously due to the delays. This synchronicity results in the neuron generating a new spike.
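&lt;p&gt;A toy coincidence-detection sketch makes the role of the delays explicit; the weights, threshold and motif below are arbitrary values for the example. The neuron fires only when the delays re-align the presynaptic spike times so that the weighted inputs fall within a single coincidence window.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def fires(spike_times, delays, w=0.4, theta=1.0, window=1e-3):
    """Fire if enough weighted spikes arrive within one coincidence window."""
    arrivals = np.sort(np.asarray(spike_times) + np.asarray(delays))
    counts = [(np.abs(arrivals - t0) &lt;= window / 2).sum() for t0 in arrivals]
    return max(counts) * w &gt;= theta

motif = [0.000, 0.002, 0.005]            # presynaptic spike times (a spiking motif)
matched_delays = [0.005, 0.003, 0.000]   # delays that re-align the motif
print(fires(motif, matched_delays))      # True : all three spikes arrive at t = 5 ms
print(fires(motif, [0.0, 0.0, 0.0]))     # False: the arrivals stay scattered
&lt;/code&gt;&lt;/pre&gt;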
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="spiking-neural-networks-hd-snn"&gt;Spiking Neural Networks: HD-SNN&lt;/h4&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
In applying this theoretical principle, we developed an algorithm to detect movement in images. We began by simulating event data from natural images set in motion along paths similar to those observed during free visual exploration. The event-driven output exhibits distinct characteristics. For instance, rapid movement results in a higher spike rate. Conversely, edges aligned with the motion direction yield minimal changes, leading to fewer spikes. This phenomenon is known as the aperture problem.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="spiking-neural-networks-hd-snn-1"&gt;Spiking Neural Networks: HD-SNN&lt;/h4&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://raw.githubusercontent.com/laurentperrinet/figures/7f382a8074552de1a6a0c5728c60d48788b5a9f8/animated_neurons/conv_HDSNN.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
We then used a neural network with a classical architecture, which we enhanced by using a spike-based representation that accounts for various synaptic delay values. In this figure, the input is on the left grid, indicating spikes of either positive or negative polarity. This input is processed through multiple channels, represented in green and orange, and generates membrane activity. This activity, in turn, leads to the production of output spikes through synaptic connection kernels with heterogeneous delays. These delays are key to identifying specific spatio-temporal patterns.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="spiking-neural-networks-hd-snn-2"&gt;Spiking Neural Networks: HD-SNN&lt;/h4&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
A key advantage of this network is its differentiability, which allows the application of traditional machine learning techniques, such as supervised learning.
We then see the emergence of various convolution kernels. The graph on the left, marked by red arrows, displays a selection of these kernels oriented in different directions.
It shows the kernels in their spatial representation; along each row, the different delays are displayed, from a delay of one time step on the right to a delay of 12 time steps on the left. Detectors that follow the motion emerge, as can be seen, for example, along the top line. These kernels integrate both positive-polarity inputs, in red, and negative-polarity inputs, in blue. Such spatio-temporal filtering is observed in neurobiology but, to my knowledge, had never been observed in a model of spiking neurons trained under natural conditions.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="spiking-neural-networks-hd-snn-3"&gt;Spiking Neural Networks: HD-SNN&lt;/h4&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy_raw.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We will now study the performance of this network in detecting motion in the incoming flow of events. When we use all the weights of the convolution kernel, we get a very good performance, of the order of 99%, represented by the black dot in the top right-hand corner. Note that in the kernels we&amp;rsquo;ve seen emerge, most of the synaptic weights are close to zero, so we might consider removing some of these weights, as this can be shown to reduce the number of computations required per event.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="spiking-neural-networks-hd-snn-4"&gt;Spiking Neural Networks: HD-SNN&lt;/h4&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy_shortening.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
This is what we&amp;rsquo;ve done, by first removing the parts of the kernel corresponding to the longest delays. This &amp;ldquo;shortens&amp;rdquo; the kernel. We quickly observed a degradation in performance, which reached half-saturation when we reduced the number of weights by around 50%. This demonstrates the importance of integrating information that is quite distant and structured over time.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h4 id="spiking-neural-networks-hd-snn-5"&gt;Spiking Neural Networks: HD-SNN&lt;/h4&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In a second step, we performed a pruning operation, which consists of progressively removing the weakest weights. This time, performance remains optimal over a wide compression range, and we reach half-saturation only when around 99.8% of the weights have been removed. This means that the network is able to maintain very good performance even when only one weight out of 600 has been kept, and therefore with a computational cost reduced by a factor of roughly 600. This property, which we didn&amp;rsquo;t expect, seems promising for creating machine learning algorithms that are less energy-hungry.&lt;/p&gt;
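&lt;p&gt;The pruning step amounts to standard magnitude pruning; the sketch below is a generic illustration of that operation, not the exact procedure used in the paper, and the kernel shape and kept fraction are hypothetical.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def prune_by_magnitude(kernel, keep_fraction):
    """Zero out all but the largest-magnitude weights of a kernel."""
    flat = np.abs(kernel).ravel()
    k = max(1, int(keep_fraction * flat.size))
    cutoff = np.sort(flat)[-k]
    return np.where(np.abs(kernel) &gt;= cutoff, kernel, 0.0)

rng = np.random.default_rng(0)
kernel = rng.laplace(size=(12, 15, 15))              # hypothetical delay x space x space kernel
sparse = prune_by_magnitude(kernel, keep_fraction=1 / 600)
print((sparse != 0).sum(), "weights kept out of", kernel.size)
&lt;/code&gt;&lt;/pre&gt;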
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h3 id="neuromorphic-models-of-vision-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2024-02-05-udem/?transition=fade" target="_blank" rel="noopener"&gt;Neuromorphic models of vision&lt;/a&gt;&lt;/h3&gt;
&lt;h4 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-seminar-at-udems-school-of-optometry-montréal-1"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-12-01-biocomp" target="_blank" rel="noopener"&gt;[2024-02-05]&lt;/a&gt; &lt;a href="https://opto.umontreal.ca/ecole/english/" target="_blank" rel="noopener"&gt;Seminar at UdeM’s School of Optometry, Montréal&lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;h4 id="laurentperrinetuniv-amufr-1"&gt;&lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/h4&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;In conclusion, we have seen that event-driven cameras open the door to new applications that mimic the performance of the human eye, in terms of computational dynamics, adaptation to light conditions and energy constraints. This technological development has recently been accompanied by the development of neuromorphic chips and innovative algorithms in the form of spiking neural networks. However, there is still a great deal of progress to be made at theoretical level, particularly in the understanding of these spiking neural networks, and we have shown the potential progress that can be made by exploiting the richness of temporal representations, particularly by taking advantage of heterogeneous delays.
Beyond these particular applications to natural image processing, I hope to have succeeded in demonstrating the importance of cross-fertilizing the field of engineering applications in general with biological neuroscience. This new line of research - known as NeuroAI or, more generally, as computational neuroscience - is likely to develop over the next few years. Thank you for your attention.&lt;/p&gt;
&lt;p&gt;To conclude, we&amp;rsquo;ve explored how event-driven cameras pave the way for new applications. These applications mirror the human eye&amp;rsquo;s performance in terms of computational dynamics, rapid adaptation to light conditions, and energy efficiency. This technological advancement is complemented by the emergence of neuromorphic chips and innovative algorithms, specifically spiking neural networks. These networks emulate biological neurons, which communicate through binary events known as spikes rather than the analog values used in traditional neural networks.&lt;/p&gt;
&lt;p&gt;Despite these advancements, there&amp;rsquo;s still much to learn, especially in understanding how spiking neural networks process information. I hope I&amp;rsquo;ve successfully highlighted the importance of integrating engineering applications with neuroscience. This emerging research area, known as NeuroAI or computational neuroscience, is evolving rapidly. The ultimate aim of NeuroAI is to emulate the brain’s performance: it’s like having the computational power of a supercomputer compacted into the size of a soccer ball, using only around 20 W of power, which is comparable to the energy consumption of a light bulb.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2023-12-14-jraf.md</title><link>https://laurentperrinet.github.io/slides/2023-12-14-jraf/</link><pubDate>Thu, 14 Dec 2023 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2023-12-14-jraf/</guid><description>&lt;section&gt;
&lt;h1 id="event-based-vision"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-12-14-jraf/?transition=fade" target="_blank" rel="noopener"&gt;Event-based vision&lt;/a&gt;&lt;/h1&gt;
&lt;h4 id="adrien-fois--laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Adrien Fois &amp;amp; Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-journées-sur-l"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-12-14-jraf" target="_blank" rel="noopener"&gt;[2023-12-14]&lt;/a&gt; &lt;a href="https://jraf-2023.sciencesconf.org/" target="_blank" rel="noopener"&gt;Journées sur l&amp;rsquo;apprentissage frugal (JRAF) &lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;p&gt;&lt;a href="mailto:adrien.fois@univ-amu.fr"&gt;adrien.fois@univ-amu.fr&lt;/a&gt;
&lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;em&gt;Hello&lt;/em&gt;, can you hear me in the back?&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m Adrien Fois from the Institut des Neurosciences de la Timone, a joint AMU / CNRS unit. I&amp;rsquo;m a post-doctoral researcher under the supervision of Laurent Perrinet, and during this seminar, I&amp;rsquo;ll be presenting &lt;em&gt;event-driven cameras&lt;/em&gt;. This innovative imaging technology and its influence on our understanding of vision will be our focus. I&amp;rsquo;d like to &lt;em&gt;thank&lt;/em&gt; the organizers for this opportunity, and all of you for coming. You can find these slides and related references on Laurent Perrinet’s website. The &lt;em&gt;outline&lt;/em&gt; of the talk is as follows: initially, I will explain the concept of an event-driven camera, especially in comparison to a traditional frame-based camera. Following that, we’ll explore some applications of these cameras using specific algorithms. Lastly, we’ll delve into how our understanding of neuroscience can enhance these algorithms.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="sensing-light"&gt;Sensing light&lt;/h1&gt;
&lt;aside class="notes"&gt;
First of all, the objective of &lt;em&gt;imaging&lt;/em&gt; is to represent a visual signal, which includes luminous intensity and color, distributed over the visual field to create a realistic representation of a visual scene.
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="http://lepassetempsderose.l.e.pic.centerblog.net/fddea7fb.gif"
&gt;
&lt;aside class="notes"&gt;
Imaging gives us the feeling that we’re seeing a scene right in front of us. For example, this galloping horse seems to move smoothly, but it’s actually an &lt;em&gt;illusion&lt;/em&gt; called apparent motion. This happens when still images are shown one after another, very quickly, making it look like the scene is moving. Our brains interpret these separate images as a single, moving scene. This technique is the foundation of motion pictures and animation, where frames are displayed quickly enough to give the &lt;em&gt;illusion&lt;/em&gt; of fluid motion.
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://upload.wikimedia.org/wikipedia/commons/0/07/The_Horse_in_Motion-anim.gif"
&gt;
&lt;aside class="notes"&gt;
Imaging techniques have also opened doors to new scientific discoveries. For example, back in the late 19th century, scientists wondered if horses lifted all four hooves off the ground when they galloped. It was too fast for our eyes to see. Eadweard Muybridge solved this puzzle using &lt;em&gt;chronophotography&lt;/em&gt;, an early form of photography that captures movement. He took a series of photos of a running horse and showed that, yes, there are moments when all four hooves are in the air. This breakthrough helped us understand animal movement better and paved the way for modern cameras.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="representing-spatio-temporal-luminous-information"&gt;Representing spatio-temporal luminous information&lt;/h2&gt;
&lt;!--
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="http://1.bp.blogspot.com/-odG4Twu0Blc/UrN3ytufKnI/AAAAAAAACRM/dzJNcpV4JfY/s1600/Monty&amp;#43;Python%27s&amp;#43;1.gif" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
--&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/movie.gif" alt="" loading="lazy" data-zoomable width="66%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
To better understand the mechanism behind this technology, let&amp;rsquo;s take a sample video.
Here, I&amp;rsquo;ve taken a grayscale &lt;em&gt;video&lt;/em&gt; from an episode of the TV series Monty Python&amp;rsquo;s Flying Circus.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="representing-spatio-temporal-luminous-information-1"&gt;Representing spatio-temporal luminous information&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/analog_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&amp;hellip; and we will focus on a &lt;em&gt;single pixel&lt;/em&gt; in the space of the visual field
In this way, we can represent the evolution of the &lt;em&gt;log intensity&lt;/em&gt; of the light signal as a function of time.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://4.bp.blogspot.com/-AHprBxkfu5o/UJ-lqR7GsmI/AAAAAAAAHpo/VJzY7HMuXe0/s1600/The&amp;#43;Horse&amp;#43;in&amp;#43;Motion,&amp;#43;1878.%C2%A0Eadweard&amp;#43;Muybridge&amp;#43;%28b.&amp;#43;9&amp;#43;April,&amp;#43;1830%29The&amp;#43;first&amp;#43;movie&amp;#43;ever&amp;#43;made,&amp;#43;from&amp;#43;still&amp;#43;photographs..gif" target="_blank" rel="noopener"&gt;http://4.bp.blogspot.com/-AHprBxkfu5o/UJ-lqR7GsmI/AAAAAAAAHpo/VJzY7HMuXe0/s1600/The+Horse+in+Motion,+1878.%C2%A0Eadweard+Muybridge+(b.+9+April,+1830)The+first+movie+ever+made,+from+still+photographs..gif&lt;/a&gt;
&lt;a href="https://upload.wikimedia.org/wikipedia/commons/0/07/The_Horse_in_Motion-anim.gif" target="_blank" rel="noopener"&gt;https://upload.wikimedia.org/wikipedia/commons/0/07/The_Horse_in_Motion-anim.gif&lt;/a&gt;
&lt;a href="https://hackaday.com/wp-content/uploads/2018/04/saccades.gif?w=600&amp;amp;h=600" target="_blank" rel="noopener"&gt;https://hackaday.com/wp-content/uploads/2018/04/saccades.gif?w=600&amp;h=600&lt;/a&gt;
&lt;a href="http://38.media.tumblr.com/831aada3328557146e214efe1cb867a5/tumblr_mslrotKPS01snyrdto1_500.gif" target="_blank" rel="noopener"&gt;http://38.media.tumblr.com/831aada3328557146e214efe1cb867a5/tumblr_mslrotKPS01snyrdto1_500.gif&lt;/a&gt;
&lt;a href="https://www.filmsranked.com/wp-content/uploads/2020/05/two-fencers.gif%22" target="_blank" rel="noopener"&gt;https://www.filmsranked.com/wp-content/uploads/2020/05/two-fencers.gif"&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="frame-based-camera-temporal-discretization"&gt;Frame-Based Camera: Temporal discretization&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/frame-based_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
From this representation, expressed in continuous time, we can &lt;em&gt;discretize&lt;/em&gt; time and measure the log intensity at regular time intervals. The difference between two images gives the &lt;em&gt;temporal resolution&lt;/em&gt;, and its inverse gives the number of images per second. This is the representation classically used in chronophotography, but also in all conventional video stream &lt;em&gt;acquisition and viewing&lt;/em&gt; technologies.
This technology is highly efficient for a wide range of signals. However, it does have certain &lt;em&gt;limitations&lt;/em&gt;.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="frame-based-camera-aliasing"&gt;Frame-Based Camera: Aliasing&lt;/h2&gt;
&lt;figure id="figure-gregor-lenz-2020httpslenzgregorcompostsevent-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://lenzgregor.com/posts/event-cameras/post-rethinking/frames.gif" alt="[[Gregor Lenz, 2020](https://lenzgregor.com/posts/event-cameras/)]" loading="lazy" data-zoomable width="85%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://lenzgregor.com/posts/event-cameras/" target="_blank" rel="noopener"&gt;Gregor Lenz, 2020&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
To illustrate a common limitation, let&amp;rsquo;s take the &lt;em&gt;example&lt;/em&gt; of three colored cubes rotating around a circle on a frontal axis. Due to the camera’s temporal resolution and the duration the shutter remains open, the captured images exhibit blur. This makes it challenging to precisely measure the cubes’ movement. As the cubes’ rotation speed increases, we might notice an effect called temporal &lt;em&gt;aliasing&lt;/em&gt;, where the movement appears distorted due to the camera’s limitations.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="frame-based-camera-wagon-wheel-illusion"&gt;Frame-Based Camera: Wagon-Wheel Illusion&lt;/h2&gt;
&lt;figure id="figure-sam-brinson-2020httpswwwsambrinsoncomnature-of-perception"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://vignette.wikia.nocookie.net/revengeristsconsortium/images/2/25/Whee.gif/revision/latest/scale-to-width-down/340?cb=20141209071330" alt="[[Sam Brinson, 2020](https://www.sambrinson.com/nature-of-perception/)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://www.sambrinson.com/nature-of-perception/" target="_blank" rel="noopener"&gt;Sam Brinson, 2020&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
This phenomenon is particularly striking when we look at a spinning wheel moving at high speed. Sometimes, the wheel spins so fast that in two consecutive images, it appears to rotate backwards. This optical illusion is known as the wagon-wheel illusion. It’s particularly noticeable in car wheels, where the central hub may seem stationary while the wheel itself seems to turn &lt;em&gt;counter&lt;/em&gt; to its actual direction on the road. Again this wagon-wheel effect is due to standard camera&amp;rsquo;s limitations.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="event-based-camera"&gt;Event-Based Camera&lt;/h1&gt;
&lt;aside class="notes"&gt;
Transitioning from conventional frame-based cameras, we now focus on the &lt;em&gt;event camera&lt;/em&gt;, a highly promising bio-inspired visual sensor.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-1"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;An event-based camera is equipped with a sensor that, much like common CMOS sensors, converts light into electrical current. Yet, it stands apart from standard frame-based cameras by taking inspiration from the human retina. There are two main differences with respect to frame-based cameras:
Firstly, each pixel of an event-based camera is &lt;em&gt;independent&lt;/em&gt;, functioning without a synchronized global clock.
Secondly, each pixel detects changes in &lt;em&gt;logarithmic light intensity&lt;/em&gt;, generating a binary event only when the change exceeds a &lt;em&gt;threshold&lt;/em&gt;. If the change is an increment - meaning the log intensity increased - the event has positive polarity; if it&amp;rsquo;s a decrement, the event has negative polarity.&lt;/p&gt;
&lt;p&gt;In summary, an event is asynchronously generated when a pixel-level change in brightness is detected. This leads to a superior temporal resolution and a reduced susceptibility to motion blur, making event-camera ideal for capturing fast-moving scenes. Now, let’s explain how discrete events are produced in response to an analog signal that evolves continuously over time.&lt;/p&gt;
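&lt;p&gt;As a minimal sketch of this principle (the threshold value and the toy signal below are made up for illustration; they are not the specification of any particular sensor):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def generate_events(log_I, theta=0.2):
    """Emit (time index, polarity) events whenever the log intensity has
    drifted by more than `theta` from the last reference level."""
    events, reference = [], log_I[0]
    for t, value in enumerate(log_I):
        while value - reference &amp;gt;= theta:     # brightness increased
            reference += theta
            events.append((t, +1))
        while reference - value &amp;gt;= theta:     # brightness decreased
            reference -= theta
            events.append((t, -1))
    return events

# toy signal: a single pixel getting brighter, then darker
log_I = np.sin(2 * np.pi * np.linspace(0, 1, 200))
print(generate_events(log_I)[:5])
&lt;/code&gt;&lt;/pre&gt;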
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-2"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_0.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Our signal is analog. It consists of the evolution of the log-intensity (y axis) of a single pixel through time (x axis).
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-3"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_1.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&amp;hellip; And we can observe that it crosses a threshold. At this precise time, the pixel generates an event. In this case, the event is of positive polarity, as it corresponds to an increase.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-4"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_2.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Then, the signal continues its course in time and crosses the threshold again, producing a new event with positive polarity.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-5"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_5.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The log intensity continues to increase, leading to further increments, in other words, to events of positive polarity.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-6"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_10.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Now the signal decreases, resulting in events with negative polarity instead of positive polarity.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-7"&gt;Event-Based Camera&lt;/h2&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_20.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Continuing this process, the simple mechanism generates a &lt;em&gt;stream&lt;/em&gt; of events for each pixel, comprising a &lt;em&gt;list&lt;/em&gt; of occurrence times and their respective polarities.
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-8"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_-1.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-9"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Let&amp;rsquo;s show it now applied to the whole analog signal.
It&amp;rsquo;s worth noting that, compared with frame-by-frame representations, this one is particularly &lt;em&gt;sparse&lt;/em&gt;: a signal with very few changes can be represented by just a few binary events. This is a very useful feature, not only because it saves &lt;em&gt;bandwidth&lt;/em&gt;, but also because it allows us to concentrate the &lt;em&gt;computations&lt;/em&gt; around the few events that represent the image. It&amp;rsquo;s also a fundamental feature of neuronal function in the brain: indeed, neurons communicate sparsely with action potentials, which can be seen as binary events.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-10"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;!--
&lt;figure id="figure-gregor-lenz-2020httpslenzgregorcompostsevent-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://lenzgregor.com/posts/event-cameras/post-rethinking/events.gif" alt="[[Gregor Lenz, 2020](https://lenzgregor.com/posts/event-cameras/)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://lenzgregor.com/posts/event-cameras/" target="_blank" rel="noopener"&gt;Gregor Lenz, 2020&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Ultimately, we obtain a list of events for each pixel, which can be &lt;em&gt;merged&lt;/em&gt; to represent the entire image. This list of events includes pixel addresses, times of occurrence and polarities. Note that as events are generated over time, they are naturally sorted by their time of occurrence. These events are then transmitted in &lt;em&gt;real-time&lt;/em&gt; to the output bus, often through a USB3 connection.
It’s interesting to draw a parallel between this process and the optic nerve, which connects our retina to the brain. In fact, the retina’s output is composed of a million ganglion cells that emit action potentials, constituting the only source of information transmitted through the &lt;em&gt;optic nerve&lt;/em&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.researchgate.net/profile/Guido-Croon/publication/313221316/figure/fig2/AS:668997448134663@1536512829861/Picture-of-the-event-based-camera-employed-in-this-work-the-DVS_W640.jpg" target="_blank" rel="noopener"&gt;https://www.researchgate.net/profile/Guido-Croon/publication/313221316/figure/fig2/AS:668997448134663@1536512829861/Picture-of-the-event-based-camera-employed-in-this-work-the-DVS_W640.jpg&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
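&lt;p&gt;As a hedged sketch of what such an event list looks like in practice (the field names follow the usual address-event convention and are not tied to a specific camera driver):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# each event carries a pixel address (x, y), a timestamp (in microseconds)
# and a polarity (+1 or -1); events arrive already sorted by time
dtype = np.dtype([("x", np.uint16), ("y", np.uint16),
                  ("t", np.uint64), ("p", np.int8)])
events = np.array([(12, 40, 1050, 1),
                   (13, 40, 1062, 1),
                   (80,  5, 1100, -1)], dtype=dtype)

print(events["t"])                  # timestamps, naturally ordered
print(events[events["p"] == 1])     # keep only positive-polarity events
&lt;/code&gt;&lt;/pre&gt;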
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-11"&gt;Event-Based Camera&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Sensor&lt;/th&gt;
&lt;th&gt;Range&lt;/th&gt;
&lt;th&gt;Framerate&lt;/th&gt;
&lt;th&gt;Resolution&lt;/th&gt;
&lt;th&gt;Power&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Human eye&lt;/td&gt;
&lt;td&gt;60 (?) dB&lt;/td&gt;
&lt;td&gt;300 (?) fps&lt;/td&gt;
&lt;td&gt;100 (?) Mpx&lt;/td&gt;
&lt;td&gt;10 mW&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DSLR&lt;/td&gt;
&lt;td&gt;44.6 dB&lt;/td&gt;
&lt;td&gt;120 fps&lt;/td&gt;
&lt;td&gt;2&amp;ndash;20 Mpx&lt;/td&gt;
&lt;td&gt;30 W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ultra-high speed&lt;/td&gt;
&lt;td&gt;64 dB&lt;/td&gt;
&lt;td&gt;10^4 fps&lt;/td&gt;
&lt;td&gt;0.3&amp;ndash;4 Mpx&lt;/td&gt;
&lt;td&gt;300 W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Event-based&lt;/td&gt;
&lt;td&gt;120 dB&lt;/td&gt;
&lt;td&gt;10^6 fps&lt;/td&gt;
&lt;td&gt;0.1&amp;ndash;2 Mpx&lt;/td&gt;
&lt;td&gt;30 mW&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Event-driven cameras boast several remarkable properties.
Firstly, their &lt;em&gt;temporal precision&lt;/em&gt; is in the microsecond range, allowing for a theoretical frame rate of up to a million images per second. In contrast, a conventional camera typically captures around a hundred images per second, while a high-speed camera may reach 10,000 images per second. Estimating the sampling frequency of human perception is challenging; although 25 frames per second usually suffice for movies, the human eye can discern temporal details at rates between 300 and 1,000 frames per second.
It’s also noteworthy that the &lt;em&gt;spatial resolution&lt;/em&gt; of event cameras is generally modest, often in the megapixel range. This is not due to technical constraints but rather reflects the cameras’ common technological applications.
Compared with conventional cameras, which consume several watts, event cameras consume very little electrical &lt;em&gt;energy&lt;/em&gt;, of the order of 10 milliwatts, a consumption equivalent to that of the human eye.
Another key feature is their ability to detect a very wide &lt;em&gt;range&lt;/em&gt; of luminosity, reaching 120 dB, which is a million times greater than conventional cameras and a thousand times greater than the human eye.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Event_camera#Functional_description" target="_blank" rel="noopener"&gt;https://en.wikipedia.org/wiki/Event_camera#Functional_description&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;more in &lt;a href="https://arxiv.org/pdf/1904.08405.pdf" target="_blank" rel="noopener"&gt;https://arxiv.org/pdf/1904.08405.pdf&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-12"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
But why is detecting a very wide range of luminosity useful? The ability to &lt;em&gt;adapt&lt;/em&gt; to changing light conditions can be illustrated by revisiting our analog signal and its event representation. Consider, for example, an autonomous car driving in daylight and then entering and exiting a &lt;em&gt;tunnel&lt;/em&gt;: this scenario involves changes in brightness by a factor of several thousand.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-13"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_low.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Here, the signal is divided by a factor of 8 in the middle section, a change that a frame-based camera would report as a large drop in measured intensity. In an event-based camera, the transition appears as a &lt;em&gt;sharp decrement&lt;/em&gt; in log-intensity space, clearly signalled by events of negative polarity; but since the camera works on log intensity, dividing the light signal only shifts the curve and leaves its time course unchanged, producing the &lt;em&gt;same&lt;/em&gt; events within the dimmed section. Event-driven cameras are therefore particularly well suited to &lt;em&gt;dynamic signals&lt;/em&gt;, where the lighting context can change drastically.
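&lt;p&gt;A quick numerical check of this invariance (on a toy luminance trace, not camera data): dividing the signal by 8 only shifts its logarithm by a constant, so the changes that trigger events are strictly identical.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

I = np.abs(np.sin(np.linspace(0, 3, 300))) + 0.1   # toy luminance trace
log_full = np.log(I)
log_dimmed = np.log(I / 8)                          # tunnel: 8 times less light

print(np.allclose(log_full - log_dimmed, np.log(8)))          # constant offset only
print(np.allclose(np.diff(log_full), np.diff(log_dimmed)))    # same changes, same events
&lt;/code&gt;&lt;/pre&gt;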
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="event-based-computer-vision"&gt;Event-Based Computer vision&lt;/h1&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;These cameras therefore look very promising for future applications, particularly for embedded applications, but also for applications linked to scientific experiments. However, we can see that the image &lt;em&gt;representation&lt;/em&gt; is completely different, that is, we can no longer consider static images that follow one another at a regular rate, and for which we could have applied the algorithms that have been developed for decades in the field of &lt;em&gt;computer vision&lt;/em&gt;. We end up with a signal that corresponds to events that are transmitted as a stream from the camera. And we have to reinvent all computer vision algorithms to make them &lt;em&gt;event-driven&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Note also that the processing becomes active: computations are driven by the signal itself, rather than by frames acquired at a fixed rate.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="always-on-object-recognition-dvs-gesture"&gt;Always-on Object Recognition: DVS gesture&lt;/h2&gt;
&lt;p&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_arm-roll.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_hand-clap.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_air-guitar.webp" width="33%"/&gt;&lt;/p&gt;
&lt;!--
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/hand_clap.gif" alt="" loading="lazy" data-zoomable width="33%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/air_guitar.gif" alt="" loading="lazy" data-zoomable width="33%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/right_hand_clockwise.gif" alt="" loading="lazy" data-zoomable width="33%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/hand_clap.gif" width="33%"/&gt;&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/air_guitar.gif" width="33%"/&gt;&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/right_hand_clockwise.gif" width="33%"/&gt;--&gt;
&lt;!-- !"" width="33%" &gt;}}
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/air_guitar.gif" alt="" loading="lazy" data-zoomable width="33%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/right_hand_clockwise.gif" alt="" loading="lazy" data-zoomable width="33%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
--&gt;
&lt;aside class="notes"&gt;
We considered a classification task using a standard event-camera dataset, the DVS Gesture dataset, which involves classifying 10 different types of human gestures. These movements are, for example, clapping hands or playing air guitar. Note that the stream of events is caused by changes in the visual scene.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="always-on-object-recognition"&gt;Always-on Object Recognition&lt;/h2&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/hots.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
So how can we process and learn from the data coming from an event-based camera?
My team, including PhD student Antoine Grimaldi, has enhanced an existing algorithm known as &lt;em&gt;HOTS&lt;/em&gt;. This algorithm employs a traditional convolutional and hierarchical structure to process information. It begins with the camera’s event data, which are processed by three stacked layers, the last layer giving a high-level representation suitable for tasks like digit recognition, for example identifying the number eight. A key aspect of HOTS is its conversion of event data into multiplexed, parallel channels that mirror the temporal sequence of events, termed the &lt;em&gt;temporal surface&lt;/em&gt;. A temporal surface provides a representation of recent activity: it jumps to one on an event and then decays exponentially through time. Each layer represents these temporal surfaces individually. Notably, the algorithm’s learning process is &lt;em&gt;unsupervised&lt;/em&gt; at every layer, marking a significant advancement over typical deep learning methods that rely on back-propagating classification errors, which is biologically implausible. Building on HOTS, we’ve improved it by incorporating neurobiological insights, particularly the principle of &lt;em&gt;homeostasis&lt;/em&gt;, to better balance the various parallel communication pathways.
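&lt;p&gt;As a minimal sketch of such a temporal surface (the exponential time constant used here is arbitrary, not the value tuned in the paper):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def time_surface(last_event_times, t_now, tau=50e-3):
    """Value of the temporal surface at time `t_now`: each input channel
    jumps to 1 at its last event and then decays exponentially."""
    last_event_times = np.asarray(last_event_times, dtype=float)
    return np.exp(-(t_now - last_event_times) / tau)

# times (in seconds) of the last event on four hypothetical input channels
last_spikes = [0.10, 0.12, 0.05, 0.13]
print(time_surface(last_spikes, t_now=0.14))
&lt;/code&gt;&lt;/pre&gt;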
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="always-on-object-gesture-recognition"&gt;Always-on Object Gesture Recognition&lt;/h2&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/gesture_offline.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;To demonstrate our algorithm’s effectiveness, we tested it on the standard dataset that I presented before, for classifying &lt;em&gt;10 distinct human gestures&lt;/em&gt;, such as clapping, waving, or drumming. With random guessing at 10%, the original HOTS algorithm achieved 70% accuracy after processing all events. However, by adding &lt;em&gt;homeostasis&lt;/em&gt;, an important concept from neuroscience, we enhanced the algorithm&amp;rsquo;s performance to 82%. This underscores the value of incorporating neuroscientific principles into machine learning. Homeostasis is used to balance the firing rates across neurons in a neural network: it ensures that all neurons contribute equally over time, avoiding dominance by a few neurons.&lt;/p&gt;
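&lt;p&gt;A minimal sketch of such a homeostatic rule (this multiplicative gain update is only illustrative; the exact regularisation used in the paper may differ):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def homeostatic_gains(effective_rates, target_rate, gains, eta=0.01):
    """Multiplicatively adjust per-neuron gains so that neurons firing above
    the target are damped and quiet neurons are boosted."""
    effective_rates = np.asarray(effective_rates, dtype=float)
    return gains * np.exp(eta * (target_rate - effective_rates))

rates = np.array([20.0, 1.0, 5.0, 0.1])   # imbalanced firing rates (in Hz)
gains = np.ones_like(rates)
for _ in range(500):
    gains = homeostatic_gains(rates * gains, target_rate=5.0, gains=gains)
print(rates * gains)   # all effective rates converge towards the target
&lt;/code&gt;&lt;/pre&gt;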
&lt;p&gt;Furthermore, we leveraged a key trait of biological systems: the ability to process information continuously, in real time. Traditional algorithms wait to classify until all events are processed. We innovated by enabling our algorithm to classify on-the-fly, in real-time, with each incoming event. This means that as events occur, they’re instantly processed through the layers, reaching the classification layer without delay.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="always-on-object-gesture-recognition-1"&gt;Always-on Object Gesture Recognition&lt;/h2&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/gesture_online.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
What&amp;rsquo;s more interesting is that we were also able to show the &lt;em&gt;evolution&lt;/em&gt; of our algorithm’s average performance relative to the dataset and the number of processed events. The blue curve reveals that with fewer than 10 events, performance hovers at chance levels. However, as more events are processed, we observe a steady improvement. Remarkably, with 10,000 events, &lt;em&gt;performance&lt;/em&gt; matches that of the original algorithm and further excels with an additional tenfold increase in events. A major advantage of this algorithm is that it can be asked to classify the nature of what it sees in real-time, at any point during the event stream —not just after the entire signal is processed. Online processing is essential in biology. For example, imagine you&amp;rsquo;re on the savannah and a &lt;em&gt;lion&lt;/em&gt; jumps out at you. You won&amp;rsquo;t have the time to wait for the video sequence to finish processing before making the right decision, which is to flee.
We’ve also refined our algorithm to select classification events based on precision calculations for each event. By adding a precision &lt;em&gt;threshold&lt;/em&gt;, we achieve high performance with merely a hundred events. This reflects a biological network trait where decisions aren’t made incrementally but rather emerge abruptly here after 200 events — and then continue to improve and stabilize.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="event-based-vision-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-12-14-jraf/?transition=fade" target="_blank" rel="noopener"&gt;Event-based vision&lt;/a&gt;&lt;/h1&gt;
&lt;h4 id="adrien-fois--laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Adrien Fois &amp;amp; Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-journées-sur-l-1"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-12-14-jraf" target="_blank" rel="noopener"&gt;[2023-12-14]&lt;/a&gt; &lt;a href="https://jraf-2023.sciencesconf.org/" target="_blank" rel="noopener"&gt;Journées sur l&amp;rsquo;apprentissage frugal (JRAF) &lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;p&gt;&lt;a href="mailto:adrien.fois@univ-amu.fr"&gt;adrien.fois@univ-amu.fr&lt;/a&gt;
&lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;To conclude, we&amp;rsquo;ve explored how event-driven cameras pave the way for new applications. These applications mirror the human eye&amp;rsquo;s performance in terms of computational dynamics, rapid light condition adaptation, and energy efficiency. This tech advancement is complemented by the emergence of neuromorphic chips and innovative algorithms, specifically spiking neural networks. These networks emulate biological neurons, which communicate through binary events known as spikes rather than the analog values used in traditional neural networks.&lt;/p&gt;
&lt;p&gt;Despite these advancements, there&amp;rsquo;s still much to learn, especially in understanding how spiking neural networks process information. I hope I&amp;rsquo;ve successfully highlighted the importance of integrating engineering applications with neuroscience. This emerging research area, known as NeuroAI or computational neuroscience, is evolving rapidly. The ultimate aim of NeuroAI is to emulate the brain’s performance: it’s like having the computational power of a supercomputer compacted into the size of a soccer ball, using only around 20W of power, which is comparable to the energy consumption of a light bulb.
It is set to evolve further in the coming years. Thank you for your attention.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks"&gt;Spiking Neural Networks&lt;/h1&gt;
&lt;aside class="notes"&gt;
I have therefore illustrated the use of &lt;em&gt;event-driven&lt;/em&gt; cameras on a particular algorithm. The nice feature of this algorithm is that it processes the stream of events from the camera on an event-by-event basis rather than having to wait for the whole video sequence to finish. Each event has the potential to initiate a series of processes across various layers, allowing for the continuous update of classification values. This type of operation is characteristic of the way neurons work in the brain, that is using an event-based representation of information processing. This is what we call &lt;em&gt;spiking neural networks&lt;/em&gt;.
&lt;/aside&gt;
&lt;hr&gt;
&lt;figure id="figure-tonic-manualhttpstonicreadthedocsioenlatest_imagesneuron-modelspng"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://tonic.readthedocs.io/en/latest/_images/neuron-models.png" alt="[[Tonic manual](https://tonic.readthedocs.io/en/latest/_images/neuron-models.png)]" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://tonic.readthedocs.io/en/latest/_images/neuron-models.png" target="_blank" rel="noopener"&gt;Tonic manual&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Traditional neural networks in deep learning typically rely on an analog representation. This is illustrated in this figure, where various analog inputs are integrated and then processed through a non-linear function to output an analog activation value. This basic &lt;em&gt;perceptron&lt;/em&gt; principle is at the foundation of all existing neural networks, including convolutional networks that excel in image classification. While effective for static images, this method can be resource-intensive for video processing. An alternative is the use of &lt;em&gt;spiking neurons&lt;/em&gt;. Unlike their analog counterparts, spiking neurons process discrete events, which are integrated in the membrane potential. When the membrane potential crosses a threshold, the neuron outputs an action potential, which can itself be seen as an event.
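&lt;p&gt;A minimal leaky integrate-and-fire sketch of this idea (the time constant, threshold and weights below are arbitrary illustration values, not those of any model presented here):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def lif(input_spikes, weights, tau=20e-3, dt=1e-3, v_thresh=1.0):
    """Integrate weighted binary input spikes into a leaky membrane potential
    and emit an output spike time whenever the threshold is crossed."""
    v, out = 0.0, []
    for t in range(input_spikes.shape[1]):
        v = v * np.exp(-dt / tau) + np.dot(weights, input_spikes[:, t])
        if v &amp;gt;= v_thresh:            # threshold crossing: fire and reset
            out.append(t * dt)
            v = 0.0
    return out

rng = np.random.default_rng(0)
spikes = rng.binomial(1, 0.05, size=(3, 200))   # 3 input channels, 200 time steps
print(lif(spikes, weights=np.array([0.6, 0.4, 0.5])))
&lt;/code&gt;&lt;/pre&gt;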
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-lif-neuron"&gt;Spiking Neural Networks: LIF Neuron&lt;/h2&gt;
&lt;figure id="figure-grimaldi-et-al-2023-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="[Grimaldi *et al*, 2023, [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023, &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This is illustrated in this &lt;em&gt;animation&lt;/em&gt;, which shows how we can transform a list of input events by giving them different weights, and then &lt;em&gt;integrate&lt;/em&gt; them into the cell&amp;rsquo;s membrane potential. When the membrane potential crosses the spiking threshold, the neuron outputs a spike.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-neuromorphic-hardware"&gt;Spiking Neural Networks: neuromorphic hardware&lt;/h2&gt;
&lt;figure id="figure-loihi-2"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://cdn.cnx-software.com/wp-content/uploads/2022/09/Intel-Loihi-2.jpg" alt="Loihi 2" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Loihi 2
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;The introduction of spiking neural networks marks a &lt;em&gt;paradigm shift&lt;/em&gt; in computation, in the same way that event-driven cameras have brought a paradigm shift in image representation. These spiking neural networks have led to the creation of innovative algorithms and the development of neuromorphic chips like Intel’s Loihi 2. This chip departs from traditional computing by utilizing a massively parallel array of event-driven processing units. As with event-driven cameras, this has the dual advantage of being very fast and consuming very little energy. The field continues to advance, with new neuromorphic chips being developed that could potentially replace standard CPUs and GPUs.&lt;/p&gt;
&lt;figure id="figure-propheseehttpsdocspropheseeaistableconceptshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://d1fmx1rbmqrxrr.cloudfront.net/zdnet/optim/i/edit/ne/2019/Pierre%20temp/Intel%20Loihi__w630.jpg" alt="[Prophesee](https://docs.prophesee.ai/stable/concepts.html)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://docs.prophesee.ai/stable/concepts.html" target="_blank" rel="noopener"&gt;Prophesee&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;p&gt;Loihi: &lt;a href="https://d1fmx1rbmqrxrr.cloudfront.net/zdnet/optim/i/edit/ne/2019/Pierre%20temp/Intel%20Loihi__w630.jpg" target="_blank" rel="noopener"&gt;https://d1fmx1rbmqrxrr.cloudfront.net/zdnet/optim/i/edit/ne/2019/Pierre%20temp/Intel%20Loihi__w630.jpg&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://cdn.cnx-software.com/wp-content/uploads/2022/09/Intel-Loihi-2.jpg?lossy=0&amp;amp;strip=none&amp;amp;ssl=1" target="_blank" rel="noopener"&gt;https://cdn.cnx-software.com/wp-content/uploads/2022/09/Intel-Loihi-2.jpg?lossy=0&amp;strip=none&amp;ssl=1&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.sstatic.net/ixnrz.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
Spiking neural networks show great potential for processing data from event-driven cameras. However, &lt;em&gt;neurophysiology&lt;/em&gt; studies reveal some unexpected behaviors, very different from the classical perceptron. I will highlight these differences with three examples. The first example is a 1995 study by Mainen and Sejnowski that examined a neuron’s response to repeated stimulations.
&lt;em&gt;Panel A&lt;/em&gt; at the top presents the neuron’s response to multiple stimulations with a 200 picoampere &lt;em&gt;current step&lt;/em&gt;. The membrane potential varied across trials, indicating an unpredictable response. Initially, the spikes were synchronized at the onset of stimulation, but coherence diminished over time, leading to no alignment after approximately 750 milliseconds.
In contrast, Panel B at the bottom shows the neuron’s response to stimulation with &lt;em&gt;noise&lt;/em&gt;. Here, the neuron exhibited highly consistent responses across trials, with nearly identical membrane potential traces. This precision was achieved using &lt;em&gt;frozen&lt;/em&gt; noise, a repeated, unchanging stimulus. The study highlights that neurons are less responsive to constant analog values, such as square pulses, and more selective to dynamic signals, responding with remarkable precision in the temporal domain.
&lt;/aside&gt;
&lt;!--
---
## Spiking Neural Networks in neurobiology
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproduucibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt; --&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-1"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-diesmann-et-al-1999httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_3_diesmann_et_al_1999py"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Diesmann_et_al_1999.png" alt="[[Diesmann et al. 1999](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py" target="_blank" rel="noopener"&gt;Diesmann et al. 1999&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this second example, I show a simulation reproducing the 1999 paper by Diesmann and colleagues. This &lt;em&gt;theoretical model&lt;/em&gt; considers ten interconnected groups, each comprising 100 neurons. Each group is connected to the next one. A key finding is that information transfer across groups depends on the temporal concentration of spikes. Initially, information is too scattered within the first group, leading to a dilution effect in subsequent groups. However, once a threshold is reached, a cluster of synchronous spikes ensures efficient propagation through the network. This non-linear dynamic is characteristic of spiking neural networks, adding a layer of richness, but also a certain complexity.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-2"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-haimerl-et-al-2019httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" alt="[[Haimerl et al, 2019](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Haimerl et al, 2019&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
A third example shows an experiment conducted by Rosa Cossart&amp;rsquo;s group at INMED and recently published by Haimerl and colleagues. They used &lt;em&gt;calcium fluorescence&lt;/em&gt; imaging to track neuronal activity in mice. By arranging the neurons in &lt;em&gt;temporal order of activation&lt;/em&gt;, it shows a sequential activation of these neurons, a mechanism which resembles the model mentioned earlier. These patterns closely align with the mouse’s motor behavior, as depicted in the accompanying graph. Notably, these activity sequences remained consistent, even when recorded on the &lt;em&gt;next day&lt;/em&gt;, underscoring the importance of temporal dynamics in neural computation.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks-spiking-motifs"&gt;Spiking Neural Networks: Spiking motifs&lt;/h1&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;These observations have led us to &lt;em&gt;review&lt;/em&gt; neurobiological evidence of neurons encoding information based on the relative timing of spikes. Intriguingly, the conduction &lt;em&gt;delays&lt;/em&gt; observed in spike transmission are not merely obstacles. Instead, they could be used to enhance information representation and processing through &lt;em&gt;spiking motifs&lt;/em&gt;. This perspective challenges traditional views and opens up new possibilities for understanding information representation, processing and learning.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-1"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-grimaldi-et-al-2023-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="[Grimaldi *et al*, 2023, [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023, &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
Consider an ultra-simplified neural network with three presynaptic neurons and two output neurons, connected by &lt;em&gt;heterogeneous&lt;/em&gt; delays. With synchronous inputs, the presynaptic spikes reach each output neuron at different times, so their summed effect fails to reach the threshold for an output spike. However, if the input spike times compensate the delays so that the action potentials arrive simultaneously, the combined input can trigger an output spike at that &lt;em&gt;same instant&lt;/em&gt;, as indicated by the red bar.
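&lt;p&gt;A toy sketch of this coincidence mechanism (the spike times and delays are made-up numbers, not those of the figure):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def arrival_times(spike_times, delays):
    """Each presynaptic spike reaches the soma after its axonal delay."""
    return np.asarray(spike_times, dtype=float) + np.asarray(delays, dtype=float)

delays = np.array([5.0, 2.0, 0.0])             # heterogeneous delays (ms)

# a spiking motif whose timing compensates the delays ...
print(arrival_times([0.0, 3.0, 5.0], delays))  # [5. 5. 5.] : coincidence, output spike
# ... whereas synchronous inputs arrive spread out and stay below threshold
print(arrival_times([0.0, 0.0, 0.0], delays))  # [5. 2. 0.] : no coincidence
&lt;/code&gt;&lt;/pre&gt;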
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-2"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To better grasp this mechanism, let’s revisit the animation of a spiking neuron. Without delays, action potentials reach the neuron’s cell body immediately, where they’re integrated to potentially trigger a spike.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-3"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/HSD.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
Now using &lt;em&gt;heterogeneous&lt;/em&gt; delays, the timing of spike arrival at the cell body varies. Introducing a specific &lt;em&gt;spiking motif&lt;/em&gt;, marked by green action potentials, allows these spikes to converge simultaneously due to the delays. This synchronicity results in the neuron generating a new spike.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
In applying this theoretical principle, we developed an algorithm to detect movement in images. We began by simulating event data from natural images set in motion along paths similar to those observed during free visual exploration. The event-driven output exhibits distinct characteristics. For instance, rapid movement results in a higher spike rate. Conversely, edges aligned with the motion direction yield minimal changes, leading to fewer spikes. This phenomenon is known as the aperture problem.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-1"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://raw.githubusercontent.com/laurentperrinet/figures/7f382a8074552de1a6a0c5728c60d48788b5a9f8/animated_neurons/conv_HDSNN.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We then used a neural network with a classical architecture, which we enhanced with a spike representation that accounts for a range of synaptic delay values. In this figure, the input is shown on the left-hand grid, indicating spikes of either positive or negative polarity. This input is processed through multiple channels, represented in green and orange, and generates membrane activity. This activity, in turn, leads to the production of output spikes, mediated by sets of synaptic connections with heterogeneous delays. These delays are key to identifying specific spatio-temporal patterns.&lt;/p&gt;
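&lt;p&gt;As a rough illustration of this idea (a minimal numpy sketch under simplifying assumptions, not the code of the paper; names such as &lt;code&gt;events&lt;/code&gt;, &lt;code&gt;kernel&lt;/code&gt; and &lt;code&gt;theta&lt;/code&gt; are hypothetical), one output channel can be seen as a weighted sum of the input spikes shifted by each synaptic delay, with a spike emitted whenever this sum crosses a threshold:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# toy dimensions: input channels, number of delay taps, time steps
rng = np.random.default_rng(0)
n_channels, n_delays, T = 8, 12, 200

# binary input spike trains, one row per presynaptic channel
events = rng.binomial(1, 0.05, size=(n_channels, T)).astype(float)

# one spatio-temporal kernel: a weight for every (channel, delay) pair
kernel = rng.normal(0.0, 0.3, size=(n_channels, n_delays))

# membrane activity: weighted sum of past events, shifted by each delay
v = np.zeros(T)
for d in range(n_delays):
    shifted = np.roll(events, d, axis=1)
    shifted[:, :d] = 0.0              # no wrap-around before t=0
    v += kernel[:, d] @ shifted

theta = 1.0                            # arbitrary firing threshold
out_spikes = np.flatnonzero(v &gt;= theta)
print("output spike times:", out_spikes)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A full HD-SNN layer would learn one such kernel per output channel and per spatial position, but the coincidence-by-delay principle is the same.&lt;/p&gt;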
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-2"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
A key advantage of this network is that it is differentiable, which allows traditional machine learning techniques, such as supervised learning, to be applied.
After training, we see the emergence of various convolution kernels. The panel on the left, marked by red arrows, displays a selection of these kernels tuned to different motion directions.
Each row shows, for one kernel, the spatial weight maps at the different delays, from a delay of one time step on the right to a delay of 12 time steps on the left. Detectors that follow the motion emerge, for instance along the top row. These kernels integrate inputs of both positive polarity, in red, and negative polarity, in blue.
Such spatio-temporal filtering is observed in neurobiology, but to my knowledge it had never been observed in a model of spiking neurons trained under natural conditions.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-3"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy_raw.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We will now study the performance of this network in detecting motion in the flow of events entering the network. When we use all the weights of the convolution kernel, we get a very good performance of the order of 99%, represented by the black dot in the top right-hand corner. Note that in the kernels we&amp;rsquo;ve seen emerge, most of the synaptic weights are close to zero, so we might consider removing some of these weights, as this can be shown to reduce the number of event calculations required.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-4"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy_shortening.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
This is what we&amp;rsquo;ve done, by first removing the parts of the kernel corresponding to the longest delays. This &amp;ldquo;shortens&amp;rdquo; the kernel. We quickly observed a degradation in performance, which reached half-saturation when we had reduced the number of weights by around 50%. This demonstrates the importance of integrating information that is distant in time and temporally structured.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-5"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In a second step, we performed a pruning operation, which consists in progressively removing the weakest weights. This time, performance remains optimal over a wide compression range, and we reach half-saturation only when around 99.8% of the weights have been removed. This means that the network maintains very good performance even when only about one weight out of 600 has been kept, and therefore with the number of synaptic computations reduced by a factor of roughly 600. This property, which we didn&amp;rsquo;t expect, seems promising for creating machine learning algorithms that are less energy-hungry.&lt;/p&gt;
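&lt;p&gt;As a back-of-the-envelope illustration of magnitude-based pruning (a generic sketch, not the exact procedure of the paper; the tensor shape and the &lt;code&gt;keep_fraction&lt;/code&gt; value are assumptions):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def prune_by_magnitude(kernel, keep_fraction):
    """Zero out all but the largest-magnitude weights of a kernel."""
    flat = np.abs(kernel).ravel()
    n_keep = max(1, int(keep_fraction * flat.size))
    threshold = np.sort(flat)[-n_keep]      # smallest weight we still keep
    mask = np.abs(kernel) &gt;= threshold
    return kernel * mask, int(mask.sum())

rng = np.random.default_rng(1)
kernel = rng.normal(0.0, 1.0, size=(2, 12, 15, 15))   # (polarity, delay, x, y)

# keep only about one weight out of 600
pruned, n_active = prune_by_magnitude(kernel, keep_fraction=1 / 600)
print(n_active, "active weights out of", kernel.size)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Since in an event-driven implementation the cost scales with the number of active synapses touched by each incoming event, such sparsity translates directly into fewer operations.&lt;/p&gt;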
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2023-12-01-biocomp.md</title><link>https://laurentperrinet.github.io/slides/2023-12-01-biocomp/</link><pubDate>Fri, 01 Dec 2023 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2023-12-01-biocomp/</guid><description>&lt;section&gt;
&lt;h1 id="event-based-vision"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-12-01-biocomp/?transition=fade" target="_blank" rel="noopener"&gt;Event-based vision&lt;/a&gt;&lt;/h1&gt;
&lt;h4 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-séminaire-colloque-biocomp-2023"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-12-01-biocomp" target="_blank" rel="noopener"&gt;[2023-12-01]&lt;/a&gt; &lt;a href="http://gdr-biocomp.fr/colloque-biocomp-2023/" target="_blank" rel="noopener"&gt;Séminaire colloque BioComp 2023&lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;p&gt;&lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;em&gt;Hello&lt;/em&gt;, can you hear me in the back?&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m Laurent Perrinet from the Institut des Neurosciences de la Timone, a joint AMU / CNRS unit, and during this seminar at the BioComp 2023 colloquium, I&amp;rsquo;ll be presenting &lt;em&gt;event-driven cameras&lt;/em&gt;, a new technology in the field of imaging, and the impact of this technology on our understanding of vision. I&amp;rsquo;d like to &lt;em&gt;thank&lt;/em&gt; organizers for this opportunity, and all of you for coming. These slides are available from my website, along with a number of references. The &lt;em&gt;outline&lt;/em&gt; of the talk is as follows: first, we&amp;rsquo;ll describe what an event-driven camera is - in particular, by comparing it to a conventional camera; then, we&amp;rsquo;ll show some examples of applications of these cameras with dedicated algorithms; and finally, we&amp;rsquo;ll present how our knowledge of biological mechanisms in neuroscience can enable us to improve these algorithms.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="sensing-light"&gt;Sensing light&lt;/h1&gt;
&lt;aside class="notes"&gt;
First of all, the general aim of &lt;em&gt;imaging&lt;/em&gt; is to represent a visual signal, i.e. a luminous intensity, a color, distributed over the visual field, giving us a vivid impression of the visual scene before our eyes.
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="http://lepassetempsderose.l.e.pic.centerblog.net/fddea7fb.gif"
&gt;
&lt;aside class="notes"&gt;
This is perfectly illustrated in this &lt;em&gt;galloping horse&lt;/em&gt;. We get a &lt;em&gt;vivid&lt;/em&gt; impression of movement thanks to a rapid sequence of still images consistent with the scene being represented. This technique clearly exploits a visual &lt;em&gt;illusion&lt;/em&gt;, because we know that at each point in visual space, the light signal is actually a &lt;em&gt;continuous&lt;/em&gt;, analog stream representing the energy of the incoming photons.
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://upload.wikimedia.org/wikipedia/commons/0/07/The_Horse_in_Motion-anim.gif"
&gt;
&lt;aside class="notes"&gt;
This technique is inspired by the research carried out by &lt;a href="https://en.wikipedia.org/wiki/Etienne-Jules_Marey" target="_blank" rel="noopener"&gt;Etienne-Jules &lt;em&gt;Marey&lt;/em&gt;&lt;/a&gt; under the term &lt;em&gt;chronophotography&lt;/em&gt;: literally, shooting a visual scene with a gun-like apparatus. It notably enabled Muybridge, later on, to scientifically demonstrate the mechanism of a horse&amp;rsquo;s gallop.
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://media.giphy.com/media/4Y8PqJGFJ21CE/giphy.gif"
&gt;
&lt;aside class="notes"&gt;
The use of such dynamic &lt;em&gt;visualization&lt;/em&gt; is crucial in the scientific field, whether in biology or physics, as it enables us to quantify the characteristics of the experiment being carried out - I&amp;rsquo;m thinking, for example, of quantifying the movements and number of bacteria in a biological assay.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="representing-spatio-temporal-luminous-information"&gt;Representing spatio-temporal luminous information&lt;/h2&gt;
&lt;!--
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="http://1.bp.blogspot.com/-odG4Twu0Blc/UrN3ytufKnI/AAAAAAAACRM/dzJNcpV4JfY/s1600/Monty&amp;#43;Python%27s&amp;#43;1.gif" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
--&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/movie.gif" alt="" loading="lazy" data-zoomable width="25%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
To better understand the mechanism behind this technology, let&amp;rsquo;s take a sample video.
Here, I&amp;rsquo;ve taken a grayscale &lt;em&gt;video&lt;/em&gt; from an episode from the Monty Python Flying Circus TV series.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="representing-spatio-temporal-luminous-information-1"&gt;Representing spatio-temporal luminous information&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/analog_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&amp;hellip; and we will focus on a &lt;em&gt;single pixel&lt;/em&gt; in the space of the visual field
In this way, we can represent the evolution of the &lt;em&gt;log intensity&lt;/em&gt; of the light signal as a function of time.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="frame-based-camera-temporal-discretization"&gt;Frame-Based Camera: Temporal discretization&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/frame-based_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
From this representation, expressed in continuous time, we can &lt;em&gt;discretize&lt;/em&gt; time and measure the log intensity at regular time intervals. The interval between two images gives the &lt;em&gt;temporal resolution&lt;/em&gt;, and its inverse gives the number of images per second. This is the representation classically used in chronophotography, but also in all conventional video stream &lt;em&gt;acquisition and viewing&lt;/em&gt; technologies.
This technology is highly efficient for a wide range of signals. However, it does have certain &lt;em&gt;limitations&lt;/em&gt;.
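&lt;p&gt;A minimal sketch of this frame-based sampling, assuming an arbitrary continuous log-intensity trace at a single pixel (all names and values are hypothetical, for illustration only):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def log_intensity(t):
    """Toy continuous log-intensity trace at one pixel."""
    return 1.0 + 0.5 * np.sin(2 * np.pi * 3.0 * t) + 0.2 * np.sin(2 * np.pi * 11.0 * t)

duration = 1.0          # seconds
frame_rate = 25.0       # frames per second
dt = 1.0 / frame_rate   # temporal resolution

# frame-based sampling: the value is only read at regular intervals
frame_times = np.arange(0.0, duration, dt)
frames = log_intensity(frame_times)
print(frames.round(2))
&lt;/code&gt;&lt;/pre&gt;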
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="frame-based-camera-aliasing"&gt;Frame-Based Camera: Aliasing&lt;/h2&gt;
&lt;figure id="figure-gregor-lenz-2020httpslenzgregorcompostsevent-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://lenzgregor.com/posts/event-cameras/post-rethinking/frames.gif" alt="[[Gregor Lenz, 2020](https://lenzgregor.com/posts/event-cameras/)]" loading="lazy" data-zoomable width="85%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://lenzgregor.com/posts/event-cameras/" target="_blank" rel="noopener"&gt;Gregor Lenz, 2020&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Let&amp;rsquo;s take the &lt;em&gt;example&lt;/em&gt; of three colored cubes rotating in a frontal axis along a circle. Because of temporal resolution and the length of time the shutter is open, the images captured at each instant can produce a certain amount of &lt;em&gt;blur&lt;/em&gt;, and movement can become increasingly difficult to estimate. If the movement of the cubes begins to accelerate, temporal &lt;em&gt;aliasing&lt;/em&gt; can be observed.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="frame-based-camera-wagon-wheel-illusion"&gt;Frame-Based Camera: Wagon-Wheel Illusion&lt;/h2&gt;
&lt;figure id="figure-sam-brinson-2020httpswwwsambrinsoncomnature-of-perception"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://vignette.wikia.nocookie.net/revengeristsconsortium/images/2/25/Whee.gif/revision/latest/scale-to-width-down/340?cb=20141209071330" alt="[[Sam Brinson, 2020](https://www.sambrinson.com/nature-of-perception/)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://www.sambrinson.com/nature-of-perception/" target="_blank" rel="noopener"&gt;Sam Brinson, 2020&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
This phenomenon is particularly striking when we look at a &lt;em&gt;wheel&lt;/em&gt; spinning at high speed, when its rotational speed is such that two successive images give the illusion that the movement is in the opposite direction to the real, physical motion. It&amp;rsquo;s striking here in this car wheel, where the central hub appears motionless while the wheel is perceived as turning in the &lt;em&gt;opposite direction&lt;/em&gt; to the physical rolling motion on the road.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="event-based-camera"&gt;Event-Based Camera&lt;/h1&gt;
&lt;aside class="notes"&gt;
Now let&amp;rsquo;s introduce the &lt;em&gt;event camera&lt;/em&gt;.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-1"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
This consists of a conventional sensor which, like most CMOS-type sensors, transforms light energy into an electric current. However, there are two fundamental differences, inspired by our knowledge of the retina, the sensor of vision. Firstly, each pixel of this sensor is &lt;em&gt;independent&lt;/em&gt; and is not driven by a global clock. Secondly, each pixel follows the evolution of the log intensity and signals an event whenever an increment or decrement exceeds a threshold. Let&amp;rsquo;s explain this mechanism in relation to our analog signal.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-2"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_0.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
First of all, the signal will evolve over time, &amp;hellip;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-3"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_1.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&amp;hellip; and we can see here that it may cross a &lt;em&gt;threshold&lt;/em&gt;. An event will then be produced by this pixel. Here, the &lt;em&gt;event&lt;/em&gt; is of negative polarity, as it corresponds to a decrement.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-4"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_2.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-5"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_5.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Then, the signal continues its course in time and crosses the threshold again, possibly several times, each time producing a new event. Here, we also see increments, i.e. events of positive polarity.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-6"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_10.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-7"&gt;Event-Based Camera&lt;/h2&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_20.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
And so on, this simple mechanism will produce a &lt;em&gt;stream&lt;/em&gt; of events for each pixel, this &lt;em&gt;list&lt;/em&gt; being made up of the times of occurrence and the corresponding polarities.
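&lt;p&gt;A minimal sketch of this per-pixel mechanism (a simplified level-crossing scheme, not the circuit of any particular sensor; all names are hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def events_from_trace(t, log_I, threshold=0.2):
    """Emit (time, polarity) events whenever the log intensity moves away
    from the last reference level by more than the threshold."""
    events = []
    reference = log_I[0]
    for ti, li in zip(t, log_I):
        while li - reference &gt;= threshold:       # increment: ON event
            reference += threshold
            events.append((ti, +1))
        while reference - li &gt;= threshold:       # decrement: OFF event
            reference -= threshold
            events.append((ti, -1))
    return events

t = np.linspace(0.0, 1.0, 10_000)                # finely sampled toy trace
log_I = 1.0 + 0.5 * np.sin(2 * np.pi * 3.0 * t)
print(events_from_trace(t, log_I)[:5])
&lt;/code&gt;&lt;/pre&gt;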
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-8"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_-1.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-9"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Let&amp;rsquo;s show it now applied to the whole analog signal.
It&amp;rsquo;s worth noting that, compared with frame-by-frame representations, this one is particularly &lt;em&gt;sparse&lt;/em&gt;: a signal with very few changes can be represented by just a few events. This is a very useful feature, not only because it saves &lt;em&gt;bandwidth&lt;/em&gt;, but also because it allows us to concentrate the &lt;em&gt;computations&lt;/em&gt; around the few events that represent the image. It&amp;rsquo;s also a fundamental feature of neuron function in the brain, and we&amp;rsquo;ll come back to it later.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-10"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;!--
&lt;figure id="figure-gregor-lenz-2020httpslenzgregorcompostsevent-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://lenzgregor.com/posts/event-cameras/post-rethinking/events.gif" alt="[[Gregor Lenz, 2020](https://lenzgregor.com/posts/event-cameras/)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://lenzgregor.com/posts/event-cameras/" target="_blank" rel="noopener"&gt;Gregor Lenz, 2020&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Finally, we obtain a list of events for each pixel, which can be &lt;em&gt;merged&lt;/em&gt; for the image as a whole, forming a list of events comprising pixel addresses, times of occurrence and polarities. As they are generated over time, they are naturally arranged in order of occurrence. All these events are then transmitted in &lt;em&gt;real time&lt;/em&gt; to the output bus, typically over a USB3 connection. Note the analogy between this representation and the one used in the optic nerve that connects our retina to the rest of the brain: indeed, the million ganglion cells that make up the retina&amp;rsquo;s output emit action potentials, which are the only source of information that leaves the retina via the &lt;em&gt;optic nerve&lt;/em&gt;.&lt;/p&gt;
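&lt;p&gt;Schematically, the merging step amounts to interleaving the per-pixel lists into a single time-ordered stream of (address, time, polarity) records (a toy sketch with hypothetical data):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;# per_pixel[(x, y)] is the list of (time, polarity) events produced by that pixel
per_pixel = {
    (0, 0): [(0.0010, +1), (0.0042, -1)],
    (1, 0): [(0.0005, +1)],
    (0, 1): [(0.0031, -1), (0.0038, +1)],
}

# merge into a single stream of (x, y, time, polarity) records ...
stream = [(x, y, t, p) for (x, y), evs in per_pixel.items() for (t, p) in evs]
# ... and sort by time of occurrence, as they are delivered on the output bus
stream.sort(key=lambda e: e[2])

for x, y, t, p in stream:
    print(f"x={x} y={y} t={t:.4f} s polarity={p:+d}")
&lt;/code&gt;&lt;/pre&gt;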
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.researchgate.net/profile/Guido-Croon/publication/313221316/figure/fig2/AS:668997448134663@1536512829861/Picture-of-the-event-based-camera-employed-in-this-work-the-DVS_W640.jpg" target="_blank" rel="noopener"&gt;https://www.researchgate.net/profile/Guido-Croon/publication/313221316/figure/fig2/AS:668997448134663@1536512829861/Picture-of-the-event-based-camera-employed-in-this-work-the-DVS_W640.jpg&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-11"&gt;Event-Based Camera&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Sensor&lt;/th&gt;
&lt;th&gt;Range&lt;/th&gt;
&lt;th&gt;Framerate&lt;/th&gt;
&lt;th&gt;Resolution&lt;/th&gt;
&lt;th&gt;Power&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Human eye&lt;/td&gt;
&lt;td&gt;60 (?) dB&lt;/td&gt;
&lt;td&gt;300 (?) fps&lt;/td&gt;
&lt;td&gt;100 (?) Mpx&lt;/td&gt;
&lt;td&gt;10 mW&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DSLR&lt;/td&gt;
&lt;td&gt;44.6 dB&lt;/td&gt;
&lt;td&gt;120 fps&lt;/td&gt;
&lt;td&gt;2&amp;ndash;20 Mpx&lt;/td&gt;
&lt;td&gt;30 W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ultra-high speed&lt;/td&gt;
&lt;td&gt;64 dB&lt;/td&gt;
&lt;td&gt;10^4 fps&lt;/td&gt;
&lt;td&gt;0.3&amp;ndash;4 Mpx&lt;/td&gt;
&lt;td&gt;300 W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Event-based&lt;/td&gt;
&lt;td&gt;120 dB&lt;/td&gt;
&lt;td&gt;10^6 fps&lt;/td&gt;
&lt;td&gt;0.1&amp;ndash;2 Mpx&lt;/td&gt;
&lt;td&gt;30 mW&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;There are several properties of event-driven cameras that make them remarkable. First of all, the &lt;em&gt;temporal precision&lt;/em&gt; of events is of the order of a microsecond, allowing a theoretical frame rate of the order of a million images per second. This can be compared with a conventional camera, of the order of a hundred images per second, or with a high-speed camera, which can reach 10,000 images per second. It is difficult to estimate the sampling frequency of human perception: while 25 frames per second is often sufficient for movie viewing, it has been shown that the human eye can distinguish temporal details up to 300 or even 1,000 frames per second. The &lt;em&gt;spatial resolution&lt;/em&gt; of these event cameras is often relatively modest, of the order of a megapixel; this is not a fundamental technical limitation, but rather reflects the applications in which these cameras are commonly used. Whereas conventional cameras consume several watts, event cameras consume very little electrical &lt;em&gt;energy&lt;/em&gt;, of the order of 10 milliwatts, a consumption comparable to that of the human eye. Another important feature of these cameras is their ability to cover a very wide &lt;em&gt;range&lt;/em&gt; of luminosity, far exceeding that of conventional cameras, at 120 dB (a factor of a million, compared with the factor of about a thousand between full moon and full sun handled by the human eye).&lt;/p&gt;
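&lt;p&gt;For reference, the decibel figures in the table convert to intensity ratios as follows (using the usual 20·log10 convention for sensor dynamic range; a simple sanity check, not data from a datasheet):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;def db_to_ratio(db):
    """Dynamic range in dB converted to a max/min intensity ratio."""
    return 10 ** (db / 20.0)

for name, db in [("DSLR", 44.6), ("Ultra-high speed", 64.0), ("Event-based", 120.0)]:
    print(f"{name}: {db} dB is a ratio of about {db_to_ratio(db):,.0f}")
&lt;/code&gt;&lt;/pre&gt;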
&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Event_camera#Functional_description" target="_blank" rel="noopener"&gt;https://en.wikipedia.org/wiki/Event_camera#Functional_description&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;more in &lt;a href="https://arxiv.org/pdf/1904.08405.pdf" target="_blank" rel="noopener"&gt;https://arxiv.org/pdf/1904.08405.pdf&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-12"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
This ability to &lt;em&gt;adapt&lt;/em&gt; to changing light conditions can be illustrated by going back to our analog signal and its event representation, and imagining a sudden change in the ambient light level. A typical example would be an autonomous car driving in daylight and entering and then leaving a &lt;em&gt;tunnel&lt;/em&gt;, which involves changes in brightness by a factor of several thousand.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-13"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_low.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
In the middle section, the signal is divided by a factor of 8. A frame-based camera would simply report much lower values there. In an event-based camera, the transition appears as a &lt;em&gt;sharp decrement&lt;/em&gt; in log intensity, clearly signalled by events of negative polarity; but since the camera operates on log intensity, dividing the light signal leaves the &lt;em&gt;same signal&lt;/em&gt; course over time, and therefore produces identical events apart from that step. Event-driven cameras are therefore particularly well-suited to &lt;em&gt;dynamic signals&lt;/em&gt;, where the lighting context can change drastically.
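&lt;p&gt;The invariance to a global change of illumination can be checked in a couple of lines (a toy trace, for illustration only):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# an arbitrary positive intensity trace at one pixel
t = np.linspace(0.0, 1.0, 10_000)
I = 2.0 + np.sin(2 * np.pi * 3.0 * t)

# the same trace with the light level divided by 8 (e.g. entering a tunnel)
I_dim = I / 8.0

# in log-intensity space the division becomes a constant offset ...
print("constant offset:", np.allclose(np.log(I) - np.log(I_dim), np.log(8.0)))

# ... so the *changes* in log intensity, which drive the events, are identical
print("identical changes:", np.allclose(np.diff(np.log(I)), np.diff(np.log(I_dim))))
&lt;/code&gt;&lt;/pre&gt;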
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="event-based-computer-vision"&gt;Event-Based Computer vision&lt;/h1&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;These cameras therefore look very promising for future applications, particularly for embedded applications, but also for applications linked to scientific experiments. However, we can see that the image &lt;em&gt;representation&lt;/em&gt; is completely different, that is, we can no longer consider static images that follow one another at a regular rate, and for which we could have applied the algorithms that have been developed for decades in the field of &lt;em&gt;computer vision&lt;/em&gt;. We end up with a signal that corresponds to events that are transmitted as a stream from the camera. And we have to reinvent all computer vision algorithms to make them &lt;em&gt;event-driven&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Note also that processing becomes &lt;em&gt;active&lt;/em&gt;: it is driven by the signal itself, rather than by a fixed acquisition clock.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="always-on-object-recognition-dvs-gesture"&gt;Always-on Object Recognition: DVS gesture&lt;/h2&gt;
&lt;p&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_arm-roll.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_hand-clap.webp" width="33%"/&gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/DVSGesture_air-guitar.webp" width="33%"/&gt;&lt;/p&gt;
&lt;!--
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/hand_clap.gif" alt="" loading="lazy" data-zoomable width="33%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/air_guitar.gif" alt="" loading="lazy" data-zoomable width="33%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/right_hand_clockwise.gif" alt="" loading="lazy" data-zoomable width="33%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/hand_clap.gif" width="33%"/&gt;&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/air_guitar.gif" width="33%"/&gt;&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/right_hand_clockwise.gif" width="33%"/&gt;--&gt;
&lt;!-- !"" width="33%" &gt;}}
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/air_guitar.gif" alt="" loading="lazy" data-zoomable width="33%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://research.ibm.com/interactive/dvsgesture/images/right_hand_clockwise.gif" alt="" loading="lazy" data-zoomable width="33%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
--&gt;
&lt;hr&gt;
&lt;h2 id="always-on-object-recognition"&gt;Always-on Object Recognition&lt;/h2&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/hots.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The first algorithm we developed with Antoine Grimaldi, a PhD student, in collaboration with Sio Ieng and Ryad Benosman of Sorbonne University, who are recognized researchers in the development of this type of camera, is an improvement on an existing algorithm, &lt;em&gt;HOTS&lt;/em&gt;. This algorithm uses a relatively classical convolutional and hierarchical processing architecture, which passes information &amp;ldquo;forward&amp;rdquo; from the camera and its event representation through different processing layers, to converge on a high-level representation that can be used for classification, in this case to recognize the identity of the digit presented as input, here the digit eight. A fundamental feature of this algorithm is that it transforms the event representation into multiplexed, parallel channels, which analogously represent the temporal pattern of events, or &amp;ldquo;&lt;em&gt;temporal surface&lt;/em&gt;&amp;rdquo;. These are represented in the different layers by the individual temporal surfaces. An interesting feature of this algorithm is that learning in each of the layers is &lt;em&gt;unsupervised&lt;/em&gt;, which is a significant improvement over conventional deep learning algorithms, which assume that a classification error signal can be back-propagated along the entire hierarchy, something that is notoriously implausible in biology. Starting from this algorithm, we improved it by including neuro-biological knowledge, in particular about the balance between different parallel communication pathways, through &lt;em&gt;homeostasis&lt;/em&gt; rules.
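&lt;p&gt;The central quantity in this family of algorithms is the temporal surface; a minimal single-polarity sketch could look like this (exponential-decay form, hypothetical names and parameters, borders ignored, not the exact implementation of the paper):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def time_surface(last_t, x, y, t, tau=0.05, radius=2):
    """Exponentially decayed map of the most recent event times around (x, y)."""
    patch = last_t[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return np.exp(-(t - patch) / tau)

H, W = 32, 32
last_t = np.full((H, W), -np.inf)           # time of the last event at each pixel

# toy stream of (x, y, t) events moving to the right along one row
events = [(10, 10, 0.010), (11, 10, 0.012), (12, 10, 0.014), (13, 10, 0.016)]
for x, y, t in events:
    last_t[y, x] = t                         # update the memory of this pixel
    ts = time_surface(last_t, x, y, t)       # local temporal context of the event
print(ts.round(2))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Each layer then clusters such surfaces into prototypes, which become the channels of the next layer.&lt;/p&gt;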
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="always-on-object-gesture-recognition"&gt;Always-on Object Gesture Recognition&lt;/h2&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/gesture_offline.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;To illustrate the results of our algorithm, we applied it to a classic event-camera dataset involving the classification of 10 different types of human &lt;em&gt;gestures&lt;/em&gt;. These biological movements are, for example, clapping hands, waving hello or a drumming movement. The chance level is therefore 10%, and we observed that, once all events have been processed, the &lt;em&gt;original&lt;/em&gt; algorithm achieves a performance of around 70%. By adding &lt;em&gt;homeostasis&lt;/em&gt;, we reached a higher level of 82%, demonstrating the usefulness of neuroscientific knowledge for improving machine learning algorithms.&lt;/p&gt;
&lt;p&gt;We also built on a fundamental characteristic of biological systems. This kind of algorithm is classically used to process the flow of events, but classification is only performed at the very end, once all the events have been processed. We modified the algorithm so that this classification can be done &lt;em&gt;online&lt;/em&gt;, in real time, event by event. In this way, processing in the various layers is triggered by the arrival of each event, which is propagated from the camera through all the layers up to the classification layer.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="always-on-object-gesture-recognition-1"&gt;Always-on Object Gesture Recognition&lt;/h2&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/gesture_online.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
What&amp;rsquo;s more interesting is that we were also able to show the &lt;em&gt;evolution&lt;/em&gt; of the average performance obtained on the data set as a function of the number of events processed by the algorithm. The blue curve shows that below 10 events we remain at chance level; performance then rises gradually, reaching the level of the original algorithm at around ten thousand events and exceeding that &lt;em&gt;performance&lt;/em&gt; when ten times more events are processed. A major advantage of this algorithm is that it can be asked to classify what the event camera sees not only once the entire signal has been processed, but at any time. This characteristic is essential in biology. For example, imagine you&amp;rsquo;re on the savannah and a &lt;em&gt;lion&lt;/em&gt; jumps out at you. You won&amp;rsquo;t have the luxury of waiting for the whole video sequence to be processed before making the right decision, which is to flee. Another variant of our algorithm consists in selecting the output classification events based on a confidence measure computed for each event. By using a &lt;em&gt;threshold&lt;/em&gt; on this confidence, we can achieve a very good level of performance with just a hundred events, and so reproduce a characteristic that is common in biological networks: the decision is not taken gradually, but emerges abruptly (here after 200 events) and then improves and stabilizes.
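&lt;p&gt;Schematically, such an online read-out can be sketched as accumulating per-class evidence event by event and committing to a decision only once a confidence threshold is reached (a generic sketch, not the paper&amp;rsquo;s classifier; all names and numbers are hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def online_decision(per_event_scores, confidence=0.95):
    """Accumulate per-class log evidence event by event; stop as soon as
    the normalized posterior of the best class exceeds the confidence level."""
    log_evidence = np.zeros(per_event_scores.shape[1])
    for k, scores in enumerate(per_event_scores, start=1):
        log_evidence += np.log(scores + 1e-12)
        posterior = np.exp(log_evidence - log_evidence.max())
        posterior /= posterior.sum()
        if posterior.max() &gt;= confidence:
            return posterior.argmax(), k        # decision and number of events used
    return posterior.argmax(), k                # fall back to the final estimate

rng = np.random.default_rng(2)
# toy stream: 500 events, each yielding a noisy score for 10 gesture classes,
# with class 3 slightly favoured on average
scores = rng.random((500, 10)) + 0.15 * (np.arange(10) == 3)
print(online_decision(scores))
&lt;/code&gt;&lt;/pre&gt;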
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks"&gt;Spiking Neural Networks&lt;/h1&gt;
&lt;aside class="notes"&gt;
We have therefore illustrated the use of &lt;em&gt;event-driven&lt;/em&gt; cameras on a particular algorithm. This algorithm has the particularity of processing the flow of events coming from the camera event by event, so that potentially each of these events triggers a cascade of mechanisms in the different processing layers, and thus enables a classification value to be updated at any given moment. This type of operation is characteristic of the way neurons work in the brain, i.e. using an event-based representation of information processing. This is what we call &lt;em&gt;spiking neural networks&lt;/em&gt;.
&lt;/aside&gt;
&lt;hr&gt;
&lt;figure id="figure-tonic-manualhttpstonicreadthedocsioenlatest_imagesneuron-modelspng"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://tonic.readthedocs.io/en/latest/_images/neuron-models.png" alt="[[Tonic manual](https://tonic.readthedocs.io/en/latest/_images/neuron-models.png)]" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://tonic.readthedocs.io/en/latest/_images/neuron-models.png" target="_blank" rel="noopener"&gt;Tonic manual&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Indeed, most neural networks used in deep learning use an analog representation. This is illustrated in this figure, which represents the various analog inputs to a formal neuron as they are linearly integrated by the synapses, then transformed by a non-linear function to generate an activation which is itself analog. This basic &lt;em&gt;perceptron&lt;/em&gt; principle is at the foundation of all existing neural networks, and in particular enables the construction of convolutional-type networks which are currently the champions for image classification, having surpassed human performance for several years. However, while this is true for static images, it can become prohibitively expensive with videos. This is why it can be interesting to use &lt;em&gt;spiking&lt;/em&gt; neurons instead, which, instead of receiving an analog input, will receive events that will trigger cascades of mechanisms in the neuronal cell, notably represented by the cell&amp;rsquo;s membrane potential. Typically, we&amp;rsquo;ll include a threshold for triggering an action potential in this cell, which will generate new output events on the cell&amp;rsquo;s axon.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-lif-neuron"&gt;Spiking Neural Networks: LIF Neuron&lt;/h2&gt;
&lt;figure id="figure-grimaldi-et-al-2023-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="[Grimaldi *et al*, 2023, [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023, &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This is illustrated in this &lt;em&gt;animation&lt;/em&gt;, which shows how we can transform a list of input events by giving them different weights, and then &lt;em&gt;integrate&lt;/em&gt; them into the cell&amp;rsquo;s soma to generate output events.&lt;/p&gt;
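&lt;p&gt;As a rough sketch of this integration (a basic leaky integrate-and-fire update with arbitrary parameters, not the exact model behind the animation):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# input spikes as (time step, presynaptic index) pairs, one weight per synapse
weights = np.array([0.6, -0.3, 0.8, 0.4])
input_spikes = [(5, 0), (6, 2), (7, 3), (12, 1), (20, 0), (21, 2), (22, 3)]

tau, dt, threshold = 10.0, 1.0, 1.0
decay = np.exp(-dt / tau)
v, output_spikes = 0.0, []

for step in range(40):
    v = v * decay                                               # membrane leak
    v += sum(weights[i] for t, i in input_spikes if t == step)  # synaptic input
    if v &gt;= threshold:                                          # threshold crossing
        output_spikes.append(step)
        v = 0.0                                                 # reset after the spike
print("output spikes at steps:", output_spikes)
&lt;/code&gt;&lt;/pre&gt;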
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-neuromorphic-hardware"&gt;Spiking Neural Networks: neuromorphic hardware&lt;/h2&gt;
&lt;figure id="figure-loihi-2"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://cdn.cnx-software.com/wp-content/uploads/2022/09/Intel-Loihi-2.jpg" alt="Loihi 2" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Loihi 2
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;This new type of representation constitutes a &lt;em&gt;paradigm shift&lt;/em&gt; in computation, in the same way that event-driven cameras have brought a paradigm shift in image representation. The development of these new algorithms, which use spiking neural networks, goes hand in hand with the development of new neuromorphic chips, such as the Loihi 2 chip developed by Intel, which replaces a central computing unit with a massively parallelized &lt;em&gt;array&lt;/em&gt; of elementary event-driven computing units. As with event-driven cameras, this has the dual advantage of being very fast and consuming very little energy. Other types of &lt;em&gt;neuromorphic chips&lt;/em&gt; are currently being developed and may soon be used instead of conventional CPUs or GPUs.&lt;/p&gt;
&lt;figure id="figure-propheseehttpsdocspropheseeaistableconceptshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://d1fmx1rbmqrxrr.cloudfront.net/zdnet/optim/i/edit/ne/2019/Pierre%20temp/Intel%20Loihi__w630.jpg" alt="[Prophesee](https://docs.prophesee.ai/stable/concepts.html)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://docs.prophesee.ai/stable/concepts.html" target="_blank" rel="noopener"&gt;Prophesee&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.sstatic.net/ixnrz.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Spiking neural networks therefore seem very promising for processing the output of event-driven cameras, but the study of &lt;em&gt;neurophysiology&lt;/em&gt; shows us that their operation can sometimes seem incongruous and far from the perceptron. In this first example, taken from a 1995 article by Mainen and Sejnowski, we see the response of the same neuron to several &lt;em&gt;repetitions&lt;/em&gt; of a stimulation in panel A. At the top, we see the membrane potential of this neuron in response to a 200 picoampere &lt;em&gt;current step&lt;/em&gt;, which shows that the membrane potential is not reproducible across different trials. This is illustrated by the spike responses over time for the different trials, which show a strong alignment at the start of stimulation that gradually disperses, so that after around 750 milliseconds there is no longer any coherence between the different trials. The situation is different in panel B, where the neuron is stimulated with &lt;em&gt;noise&lt;/em&gt;. In this case, the responses are so precise across trials that the membrane potential traces overlap almost exactly. The subtlety of this paper lies in its use of a &lt;em&gt;frozen&lt;/em&gt; noise, i.e. one that is repeated unchanged across trials. In this way, it demonstrates that neurons are not so much sensitive to analog values presented in the form of square pulses as to dynamic signals, to which they respond with very high temporal precision.&lt;/p&gt;
&lt;/aside&gt;
&lt;!--
---
## Spiking Neural Networks in neurobiology
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproduucibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt; --&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-1"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-diesmann-et-al-1999httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_3_diesmann_et_al_1999py"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Diesmann_et_al_1999.png" alt="[[Diesmann et al. 1999](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py" target="_blank" rel="noopener"&gt;Diesmann et al. 1999&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this other example, I show a simulation that reproduces the 1999 paper by Diesmann and colleagues. This &lt;em&gt;theoretical model&lt;/em&gt; considers ten groups of 100 neurons connected from group to group. An interesting property of this system is that, for the same stimulation, i.e. for the same number of spikes, activity propagates from group to group only if it is sufficiently &lt;em&gt;concentrated in time&lt;/em&gt;. In the first cases shown, the spikes of the initial group are too dispersed in time, and the activity becomes progressively more dispersed in subsequent groups until it dies out. Above a certain level of synchrony, the packet formed by a group of relatively synchronous spikes is correctly transmitted across the successive groups of the network. This &lt;em&gt;non-linear&lt;/em&gt; behavior is one of the characteristics of spiking networks, giving them a certain richness, but also a certain complexity.&lt;/p&gt;
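&lt;p&gt;The qualitative behaviour can be caricatured in a few lines (a deliberately crude toy, not the leaky integrate-and-fire simulation of Diesmann et al.; thresholds, windows and jitter values are arbitrary assumptions):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

rng = np.random.default_rng(3)

def next_group(spike_times, n_neurons=100, a_threshold=60, window=0.003, delay=0.005):
    """Toy rule for one group: the more input spikes fall inside the coincidence
    window, the more neurons fire; output jitter depends on the input dispersion."""
    if spike_times.size == 0:
        return np.array([])
    t = np.sort(spike_times)
    center = np.median(t)
    a_in = int(np.sum(np.less_equal(np.abs(t - center), window)))
    p_fire = min(1.0, a_in / a_threshold)
    n_out = rng.binomial(n_neurons, p_fire)
    sigma_in = t.std() if t.size &gt; 1 else 0.0
    # strong, synchronous packets tighten; weak, dispersed ones broaden
    sigma_out = sigma_in * (0.7 if p_fire &gt;= 1.0 else 1.3) + 0.0002
    return center + delay + rng.normal(0.0, sigma_out, size=n_out)

def run_chain(sigma0, n_spikes0=100, n_groups=10):
    packet = rng.normal(0.0, sigma0, size=n_spikes0)
    for g in range(n_groups):
        packet = next_group(packet)
        sigma_ms = packet.std() * 1000 if packet.size else 0.0
        print(f"group {g + 1}: {packet.size:3d} spikes, sigma = {sigma_ms:.2f} ms")

run_chain(sigma0=0.0005)   # tight packet: activity propagates stably
run_chain(sigma0=0.0100)   # dispersed packet: activity fades out
&lt;/code&gt;&lt;/pre&gt;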
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-2"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-haimerl-et-al-2019httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" alt="[[Haimerl et al, 2019](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Haimerl et al, 2019&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
A third example shows an experiment conducted by Rosa Cossart&amp;rsquo;s group at INMED and recently published by Haimerl and colleagues. It shows the results of &lt;em&gt;calcium fluorescence&lt;/em&gt; imaging recordings in mice. By arranging the different neurons in their &lt;em&gt;temporal order of activation&lt;/em&gt;, one reveals a sequential activation of these neurons, a mechanism reminiscent of the model mentioned earlier. These activation groups are strongly correlated with the &lt;em&gt;motor behavior&lt;/em&gt; of the mouse, as described in the graph at the top. Of particular interest is the fact that these sequences of activity are stable over time and can be recorded again on a &lt;em&gt;subsequent day&lt;/em&gt;. This illustrates the importance of dynamics in neural computations.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks-spiking-motifs"&gt;Spiking Neural Networks: Spiking motifs&lt;/h1&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;These observations have led us to &lt;em&gt;review&lt;/em&gt; neurobiological evidence around the existence of a neural representation that would use the relative time of spikes as a means of representing information. In particular, it is possible to use the conduction &lt;em&gt;delays&lt;/em&gt; that exist in the transmission of spikes from one neuron to another. It may seem paradoxical, but these delays are not simply a constraint, but can help to improve our ability to represent information by way of &lt;em&gt;spiking motifs&lt;/em&gt;.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-1"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-grimaldi-et-al-2023-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="[Grimaldi *et al*, 2023, [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023, &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If we consider, for example, this ultra-simplified network consisting of three presynaptic neurons and two output neurons connected by &lt;em&gt;heterogeneous&lt;/em&gt; delays, then we can see that a &lt;em&gt;synchronous&lt;/em&gt; input will generate membrane activity in the two output neurons at different times, so the threshold will never be reached, and these neurons will not produce an output impulse. On the other hand, if these delays are such that the action potentials converge on the neuron at the same instant, then these contributions will be able to sum up at the &lt;em&gt;same instant&lt;/em&gt; and produce an output spike, as denoted here by the red bar.&lt;/p&gt;
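&lt;p&gt;A numerical sketch of this toy circuit for a single output neuron (delays, threshold and EPSP width chosen arbitrarily for illustration):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# three presynaptic neurons projecting onto one output neuron,
# each connection having its own conduction delay (in ms)
delays = np.array([1.0, 5.0, 9.0])
threshold = 2.5     # roughly all three EPSPs must coincide
tau = 2.0           # width of each EPSP (ms), modelled as a boxcar

def responds(presyn_times):
    """True if the delayed inputs overlap enough to cross the threshold."""
    arrival = np.asarray(presyn_times) + delays
    t = np.arange(0.0, arrival.max() + tau, 0.1)
    # each input contributes 1 during `tau` ms after its arrival
    active = np.less_equal(arrival[:, None], t) * np.less(t, (arrival + tau)[:, None])
    v = active.sum(axis=0)
    return bool(np.any(np.greater_equal(v, threshold)))

# synchronous input: arrivals are spread out by the delays, no output spike
print("synchronous input fires:", responds([0.0, 0.0, 0.0]))

# a motif that compensates the delays: arrivals coincide, the neuron fires
print("delay-matched motif fires:", responds([8.0, 4.0, 0.0]))
&lt;/code&gt;&lt;/pre&gt;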
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-2"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To better understand this mechanism, let&amp;rsquo;s return to our animation of a spiking neuron. Action potentials arrive at the neuron and are &lt;em&gt;immediately&lt;/em&gt; transmitted to the neuron&amp;rsquo;s cell body to be integrated and potentially generate a spike.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-3"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/HSD.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When using &lt;em&gt;heterogeneous&lt;/em&gt; delays, the situation is different, as each input now takes a different time to reach the neuron&amp;rsquo;s cell body. If the input contains a particular &lt;em&gt;spiking motif&lt;/em&gt;, highlighted here by the green action potentials, these spikes converge at the same instant thanks to the delays. The neuron then signals this detection by emitting a new spike.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
We used this theoretical principle in an algorithm for detecting motion in an image. To do this, we first generated event data from natural images set in motion along trajectories resembling those produced by free exploration of a visual scene. You&amp;rsquo;ll notice several features of the event-driven output, such as the fact that faster motion generates more spikes, or that edges oriented parallel to the direction of motion produce few changes, and therefore little spike output - the so-called aperture problem.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-1"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://raw.githubusercontent.com/laurentperrinet/figures/7f382a8074552de1a6a0c5728c60d48788b5a9f8/animated_neurons/conv_HDSNN.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We then used a neural network with a classical convolutional architecture, which we enhanced with a spiking representation that takes into account different possible synaptic delays. In this figure, the input is shown in the left grid as the occurrence of spikes of positive or negative polarity. Different processing channels, denoted by the colors green and orange, are applied to this input to produce membrane activity. As illustrated above, this activity produces output spikes through synaptic kernels with heterogeneous delays, corresponding to the detection of precise spatio-temporal patterns.&lt;/p&gt;
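&lt;p&gt;As a hedged sketch of this idea, with made-up shapes rather than the actual architecture: treating the delay as an extra kernel dimension turns the detection of delayed coincidences into a temporal convolution over the event stream.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;# Sketch: heterogeneous delays as an extra kernel dimension (shapes and
# parameters are illustrative assumptions, not those of the published model).
import numpy as np

N_pre, N_delay, T = 10, 12, 200
rng = np.random.default_rng(0)
events = (rng.random((N_pre, T)) &lt; 0.05).astype(float)    # binary input raster
kernel = rng.normal(scale=0.1, size=(N_pre, N_delay))      # one weight per (input, delay)

# membrane potential of one postsynaptic neuron:
# v[t] = sum over inputs i and delays d of kernel[i, d] * events[i, t - d]
potential = np.zeros(T)
for d in range(N_delay):
    shifted = np.roll(events, d, axis=1)
    shifted[:, :d] = 0.0                                    # no wrap-around in time
    potential += kernel[:, d] @ shifted

output_spikes = potential &gt;= 1.0                            # threshold to output spikes
&lt;/code&gt;&lt;/pre&gt;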
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-2"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;One advantage of this network is that it is differentiable, enabling us to apply classical machine learning methods, notably supervised learning. Different convolution kernels then emerge, and here I show a subset of these kernels for different motion directions, as denoted by the red arrows on the left of the graph. The kernels are shown in their spatial representation along the different columns, while each row represents a different delay, from a delay of one time step on the right to a delay of 12 time steps on the left. Detectors that follow the motion emerge, for example along the top row. These kernels integrate inputs of both positive (red) and negative (blue) polarity.
Such spatio-temporal filtering is observed in neurobiology but, to my knowledge, had never been obtained in a model of spiking neurons trained under natural conditions.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-3"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy_raw.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We will now study the performance of this network in detecting motion in the stream of events entering the network. When we use all the weights of the convolution kernel, we obtain a very good accuracy, of the order of 99%, represented by the black dot in the top right-hand corner. Note that in the kernels we&amp;rsquo;ve seen emerge, most of the synaptic weights are close to zero, so we might consider removing some of them, since this reduces the number of computations required per event.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-4"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy_shortening.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
This is what we&amp;rsquo;ve done, by first removing the parts of the kernel corresponding to the longest delays. This &amp;ldquo;shortens&amp;rdquo; the kernel. We quickly observed a degradation in performance, which reached half-saturation when the number of weights was reduced by around 50%. This demonstrates the importance of integrating information that is relatively distant and structured in time.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-5"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In a second step, we performed a pruning operation, which consists in progressively removing the weakest weights. This time, performance remains near optimal over a wide compression range, and we only reach half-saturation once around 99.8% of the weights have been removed. This means that the network maintains very good performance even when only about one weight in 600 is kept, and therefore with the number of computations reduced by a factor of roughly 600. This property, which we did not expect, seems promising for designing machine learning algorithms that are less energy-hungry.&lt;/p&gt;
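&lt;p&gt;For reference, a small sketch of the pruning idea described here, keeping only the strongest fraction of weights; the function and numbers are illustrative, not the code used in the paper.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;# Magnitude pruning sketch: zero out all but the largest-magnitude weights.
import numpy as np

def prune(kernel, keep_fraction):
    flat = np.abs(kernel).ravel()
    k = max(1, int(keep_fraction * flat.size))
    cutoff = np.sort(flat)[-k]                 # magnitude of the k-th largest weight
    return np.where(np.abs(kernel) &gt;= cutoff, kernel, 0.0)

kernel = np.random.default_rng(1).normal(size=(10, 12))
sparse = prune(kernel, keep_fraction=1 / 600)  # keep roughly one weight out of 600
print(np.count_nonzero(sparse), "non-zero weights left")
&lt;/code&gt;&lt;/pre&gt;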
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="event-based-vision-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-12-01-biocomp/?transition=fade" target="_blank" rel="noopener"&gt;Event-based vision&lt;/a&gt;&lt;/h1&gt;
&lt;h4 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-séminaire-colloque-biocomp-2023-1"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-12-01-biocomp" target="_blank" rel="noopener"&gt;[2023-12-01]&lt;/a&gt; &lt;a href="http://gdr-biocomp.fr/colloque-biocomp-2023/" target="_blank" rel="noopener"&gt;Séminaire colloque BioComp 2023&lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;p&gt;&lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
In conclusion, we have seen that event-driven cameras open the door to new applications that mimic the performance of the human eye, in terms of computational dynamics, adaptation to light conditions and energy constraints. This technological development has recently been accompanied by the development of neuromorphic chips and innovative algorithms in the form of spiking neural networks. However, there is still a great deal of progress to be made at the theoretical level, particularly in the understanding of these spiking neural networks, and we have shown the gains that can be obtained by exploiting the richness of temporal representations, particularly by taking advantage of heterogeneous delays.
Beyond these particular applications to natural image processing, I hope to have succeeded in demonstrating the importance of cross-fertilizing the field of engineering applications in general with biological neuroscience. This new line of research - known as NeuroAI or, more generally, as computational neuroscience - is likely to develop over the next few years. Thank you for your attention.
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2023-11-07-snufa.md</title><link>https://laurentperrinet.github.io/slides/2023-11-07-snufa/</link><pubDate>Tue, 07 Nov 2023 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2023-11-07-snufa/</guid><description>&lt;h2 id="accurate-detection-of-spiking-motifs-by-learning-heterogeneous-delays-of-a-spiking-neural-network"&gt;&lt;strong&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-11-07-snufa/?transition=fade" target="_blank" rel="noopener"&gt;Accurate Detection of Spiking Motifs by Learning Heterogeneous Delays of a Spiking Neural Network&lt;/a&gt;&lt;/strong&gt;&lt;/h2&gt;
&lt;h4 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="snufa-spiking-neural-networks-as-universal-function-approximators"&gt;&lt;em&gt;&lt;strong&gt;&lt;a href="https://snufa.net/2023/" target="_blank" rel="noopener"&gt;SNUFA: Spiking Neural networks as Universal Function Approximators&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/talk/2023-09-27_icann/qrcode.png" alt="qrcode" height="130"/&gt; --&gt;
&lt;p&gt;&lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-11-07-snufa" target="_blank" rel="noopener"&gt;https://laurentperrinet.github.io/talk/2023-11-07-snufa&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;em&gt;Hello&lt;/em&gt;, I&amp;rsquo;m Laurent Perrinet from the Institut des Neurosciences de la Timone, a joint AMU / CNRS unit, and during this talk at SNUFA, I&amp;rsquo;ll be presenting a method for the &lt;em&gt;Accurate Detection of Spiking Motifs by Learning Heterogeneous Delays of a Spiking Neural Network&lt;/em&gt;, and how it may also impact the design of SNNs. The &lt;em&gt;outline&lt;/em&gt; of the talk is as follows: first, I&amp;rsquo;ll describe how one may perform computations using Heterogeneous Delays - and present a toy model example; then, I&amp;rsquo;ll show real scale example quantifying the performance on synthetic data ; and finally, I&amp;rsquo;ll present how this SNN is in fact differentiable and may be extended for future applications.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="core-mechanism-of-spiking-motif-detection"&gt;Core Mechanism of Spiking Motif Detection&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich_left.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The core idea of the method follows the use of polychronous groups as defined by Izhikevich in 2006. Suppose three presynaptic neurons are connected to two postsynaptic neurons by certain weights and certain delays, which correspond to the time it takes for a spike to travel from one neuron to the next.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="core-mechanism-of-spiking-motif-detection-1"&gt;Core Mechanism of Spiking Motif Detection&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich_middle.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
If we assume these delays are different, then when the presynaptic neurons are activated synchronously, the postsynaptic currents do not coincide in time, such that the firing threshold is not reached.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="core-mechanism-of-spiking-motif-detection-2"&gt;Core Mechanism of Spiking Motif Detection&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
However, if the timing of presynaptic spikes forms a &lt;em&gt;spiking motif&lt;/em&gt; such that they reach the soma of neuron b_1 at the same time then this neuron will be selectively activated.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="from-generating-raster-plots-to-inferring-spiking-motifs"&gt;From generating raster plots to inferring spiking motifs&lt;/h2&gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_toy-a_k.svg" width="42%"&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_toy-b.svg" width="42%"&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_toy-c.svg" width="42%"&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_toy-a.svg" width="42%"&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;em&gt;A&lt;/em&gt; In this work, this principle was framed in a probabilistic setting such that we could provide an optimal scheme for detecting generic spiking motifs which may be superposed at random times. Starting with 10 presynaptic inputs, this model allows us to generate a synthetic raster plot as the combination of four different spiking motifs.
&lt;em&gt;B&lt;/em&gt; These motifs are defined by positive (red) or negative (blue) contributions to the spiking probability, which are represented here.
&lt;em&gt;C&lt;/em&gt; Applying a Bayesian approach, we may define four formal spiking neurons which integrate the incoming spiking information from the presynaptic neurons - this analog signal can then be thresholded to give the detection of each spiking motif (vertical bars), which was here always exact with respect to the ground truth (stars).
&lt;em&gt;D&lt;/em&gt; The beauty of this is that we can then recover the contribution of each spiking motif to the original presynaptic raster plot.
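As a rough illustration of the generative side, here is a hedged sketch that builds a raster plot by superposing binary motifs at random onset times; the actual model modulates a spiking probability with positive and negative contributions, and all sizes below are made up.
&lt;pre&gt;&lt;code class="language-python"&gt;# Hedged sketch of the generative model: superpose spiking motifs at random
# onset times (the published model works on spiking probabilities instead).
import numpy as np
rng = np.random.default_rng(2)

N_pre, D, T, K = 10, 15, 300, 4
motifs = rng.random((K, N_pre, D)) &lt; 0.08      # K binary spiking motifs
raster = np.zeros((N_pre, T), dtype=bool)
onsets = []
for k in range(K):
    for t0 in rng.choice(T - D, size=3, replace=False):   # 3 occurrences per motif
        raster[:, t0:t0 + D] |= motifs[k]
        onsets.append((k, int(t0)))            # ground truth kept for evaluation
&lt;/code&gt;&lt;/pre&gt;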
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="detecting-spiking-motifs-using-heterogeneous-delays"&gt;Detecting spiking motifs using heterogeneous delays&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_N_SMs.svg" width="31%"&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_N_pre.svg" width="31%"&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_N_SM_time.svg" width="31%"&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
That was a toy example; let&amp;rsquo;s now quantify the performance of this method at a realistic scale by measuring the accuracy of finding the right spiking motif at the right time. For this, we compare our method to a classical approach based on cross-correlation.
First, when increasing the number of motifs, we show that the accuracy of our method (in blue) is very high and outperforms the cross-correlation method (in red), in particular as the number of spiking motifs increases. The same trend also holds when the number of presynaptic inputs increases from low to high dimensions. Finally, the number of possible delays is a crucial parameter, and enough heterogeneous delays are necessary to reach good performance.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="detecting-spiking-motifs-using-heterogeneous-delays-supervised-learning"&gt;Detecting spiking motifs using heterogeneous delays: supervised learning&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_xcorr-supervised.svg" width="62%"&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
An advantage of our method is that it is fully differentiable. We thus applied a supervised learning method and, starting from random weights, we could recover the spiking motifs, as shown here in this cross-correlogram of the learned weights with respect to the ground truth.
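To illustrate why differentiability matters, here is a hedged sketch, not the method from the paper: the delayed correlation is linear in the weights, so a logistic readout can be fitted to ground-truth detections by plain gradient descent; all names and parameters are assumptions.
&lt;pre&gt;&lt;code class="language-python"&gt;# Sketch of supervised learning of delay weights with a logistic loss.
import numpy as np

def evidence(raster, w):
    """raster: (N_pre, T) array of 0/1 floats; w: (N_pre, D) weights per delay."""
    N_pre, D = w.shape
    out = np.zeros(raster.shape[1])
    for d in range(D):
        rolled = np.roll(raster, d, axis=1)
        rolled[:, :d] = 0.0
        out += w[:, d] @ rolled
    return out

def fit(raster, target, D=15, lr=0.1, steps=200):
    """target: (T,) binary ground-truth detections; returns learned weights."""
    N_pre, T = raster.shape
    w = np.zeros((N_pre, D))
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-evidence(raster, w)))   # sigmoid readout
        err = p - target                                  # gradient of the logistic loss
        for d in range(D):
            rolled = np.roll(raster, d, axis=1)
            rolled[:, :d] = 0.0
            w[:, d] -= lr * (rolled @ err) / T
    return w                                              # compare to the ground-truth motif
&lt;/code&gt;&lt;/pre&gt;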
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="accurate-detection-of-spiking-motifs-by-learning-heterogeneous-delays-of-a-spiking-neural-network-1"&gt;&lt;strong&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-11-07-snufa/?transition=fade" target="_blank" rel="noopener"&gt;Accurate Detection of Spiking Motifs by Learning Heterogeneous Delays of a Spiking Neural Network&lt;/a&gt;&lt;/strong&gt;&lt;/h2&gt;
&lt;h4 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="snufa-spiking-neural-networks-as-universal-function-approximators-1"&gt;&lt;em&gt;&lt;strong&gt;&lt;a href="https://snufa.net/2023/" target="_blank" rel="noopener"&gt;SNUFA: Spiking Neural networks as Universal Function Approximators&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;p&gt;&lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-11-07-snufa" target="_blank" rel="noopener"&gt;https://laurentperrinet.github.io/talk/2023-11-07-snufa&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;As a conclusion, this heterogeneous delay spiking neural network provides an efficient neural computation. It has some limitations that we detail in the paper, notably that it works in discrete time and that it is supervised; yet we hope to soon deliver an unsupervised learning method using this computational brick, which could be used to build novel SNNs - as we did for detecting motion in event-based data - but also to analyse neurobiological data.&lt;/p&gt;
&lt;p&gt;Thanks for your attention, slides are also available online&lt;/p&gt;
&lt;/aside&gt;</description></item><item><title>2023-09-27_icann.md</title><link>https://laurentperrinet.github.io/slides/2023-09-27_icann/</link><pubDate>Wed, 27 Sep 2023 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2023-09-27_icann/</guid><description>&lt;h2 id="accurate-detection-of-spiking-motifs-by-learning-heterogeneous-delays-of-a-spiking-neural-network"&gt;&lt;strong&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-09-27_icann/?transition=fade" target="_blank" rel="noopener"&gt;Accurate Detection of Spiking Motifs by Learning Heterogeneous Delays of a Spiking Neural Network&lt;/a&gt;&lt;/strong&gt;&lt;/h2&gt;
&lt;h4 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="icann-workshop-on-recent-advances-in-snns"&gt;ICANN workshop on &lt;em&gt;&lt;strong&gt;&lt;a href="https://e-nns.org/icann2023/wp-content/uploads/sites/7/2023/04/ICANN2023-ASNN-CfP.pdf" target="_blank" rel="noopener"&gt;Recent Advances in SNNs&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/talk/2023-09-27_icann/qrcode.png" alt="qrcode" height="130"/&gt; --&gt;
&lt;p&gt;&lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-09-27-icann" target="_blank" rel="noopener"&gt;https://laurentperrinet.github.io/talk/2023-09-27-icann&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;em&gt;Hello&lt;/em&gt;, I&amp;rsquo;m Laurent Perrinet from the Institut des Neurosciences de la Timone, a joint AMU / CNRS unit, and during this talk at this ICANN workshop on Recent Advances in SNNs, I&amp;rsquo;ll be presenting a method for the &lt;em&gt;Accurate Detection of Spiking Motifs by Learning Heterogeneous Delays of a Spiking Neural Network&lt;/em&gt;, and how it may also impact the design of SNNs. I&amp;rsquo;d like to &lt;em&gt;thank&lt;/em&gt; Sander Bohté and Sebastian Otte for the organization of this workshop and you for listening. These slides are available from my web-site, along with a number of references. The &lt;em&gt;outline&lt;/em&gt; of the talk is as follows: first, I&amp;rsquo;ll describe how one may perform computations using Heterogeneous Delays - and present a toy model example; then, I&amp;rsquo;ll show real scale example quantifying the performance on synthetic data ; and finally, I&amp;rsquo;ll present how this SNN is in fact differentiable and may be extended for future applications.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="core-mechanism-of-spiking-motif-detection"&gt;Core Mechanism of Spiking Motif Detection&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich_left.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The core idea of the method follows the use of polychronous groups as defined by Izhikevich in 2006. Suppose three presynaptic neurons are connected to two postsynaptic neurons by certain weights and certain delays, which correspond to the time it takes for a spike to travel from one neuron to the next.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="core-mechanism-of-spiking-motif-detection-1"&gt;Core Mechanism of Spiking Motif Detection&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich_middle.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
If we assume these delays are different, then when the presynaptic neurons are activated synchronously, the postsynaptic currents do not coincide in time, such that the firing threshold is not reached.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="core-mechanism-of-spiking-motif-detection-2"&gt;Core Mechanism of Spiking Motif Detection&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
However, if the timing of presynaptic spikes forms a &lt;em&gt;spiking motif&lt;/em&gt; such that they reach the soma of neuron b_1 at the same time then this neuron will be selectively activated.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="from-generating-raster-plots-to-inferring-spiking-motifs"&gt;From generating raster plots to inferring spiking motifs&lt;/h2&gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_toy-a_k.svg" width="42%"&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_toy-b.svg" width="42%"&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_toy-c.svg" width="42%"&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_toy-a.svg" width="42%"&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;em&gt;A&lt;/em&gt; In this work, this principle was framed in a probabilistic setting such that we could provide an optimal scheme for detecting generic spiking motifs which may be superposed at random times. Starting with 10 presynaptic inputs, this model allows us to generate a synthetic raster plot as the combination of four different spiking motifs.
&lt;em&gt;B&lt;/em&gt; These motifs are defined by positive (red) or negative (blue) contributions to the spiking probability, which are represented here.
&lt;em&gt;C&lt;/em&gt; Applying a Bayesian approach, we may define four formal spiking neurons which integrate the incoming spiking information from the presynaptic neurons - this analog signal can then be thresholded to give the detection of each spiking motif (vertical bars), which was here always exact with respect to the ground truth (stars).
&lt;em&gt;D&lt;/em&gt; The beauty of this is that we can then recover the contribution of each spiking motif to the original presynaptic raster plot.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="detecting-spiking-motifs-using-heterogeneous-delays"&gt;Detecting spiking motifs using heterogeneous delays&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_N_SMs.svg" width="31%"&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_N_pre.svg" width="31%"&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_N_SM_time.svg" width="31%"&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
That was a toy example; let&amp;rsquo;s now quantify the performance of this method at a realistic scale by measuring the accuracy of finding the right spiking motif at the right time. For this, we compare our method to a classical approach based on cross-correlation.
First, when increasing the number of motifs, we show that the accuracy of our method (in blue) is very high and outperforms the cross-correlation method (in red), in particular as the number of spiking motifs increases. The same trend also holds when the number of presynaptic inputs increases from low to high dimensions. Finally, the number of possible delays is a crucial parameter, and enough heterogeneous delays are necessary to reach good performance.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="detecting-spiking-motifs-using-heterogeneous-delays-1"&gt;Detecting spiking motifs using heterogeneous delays&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/THC_xcorr-supervised.svg" width="62%"&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
An advantage of our method is that it is fully differentiable. We thus applied a supervised learning method and, starting from random weights, we could recover the spiking motifs, as shown here in this cross-correlogram of the learned weights with respect to the ground truth.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="accurate-detection-of-spiking-motifs-by-learning-heterogeneous-delays-of-a-spiking-neural-network-1"&gt;&lt;strong&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-09-27_icann/?transition=fade" target="_blank" rel="noopener"&gt;Accurate Detection of Spiking Motifs by Learning Heterogeneous Delays of a Spiking Neural Network&lt;/a&gt;&lt;/strong&gt;&lt;/h2&gt;
&lt;h4 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="icann-workshop-on-recent-advances-in-snns-1"&gt;ICANN workshop on &lt;em&gt;&lt;strong&gt;&lt;a href="https://e-nns.org/icann2023/wp-content/uploads/sites/7/2023/04/ICANN2023-ASNN-CfP.pdf" target="_blank" rel="noopener"&gt;Recent Advances in SNNs&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;p&gt;&lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-09-27-icann" target="_blank" rel="noopener"&gt;https://laurentperrinet.github.io/talk/2023-09-27-icann&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;As a conclusion, this heterogeneous delay spiking neural network provides an efficient neural computation. It has some limitations that we detail in the paper, notably that it works in discrete time and that it is supervised; yet we hope to soon deliver an unsupervised learning method using this computational brick, which could be used to build novel SNNs - as we did for detecting motion in event-based data - but also to analyse neurobiological data.&lt;/p&gt;
&lt;p&gt;Thanks for your attention, slides are also available online&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="accurate-detection-of-spiking-motifs-by-learning-heterogeneous-delays-of-a-spiking-neural-network-2"&gt;&lt;strong&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-09-27_icann/?transition=fade" target="_blank" rel="noopener"&gt;Accurate Detection of Spiking Motifs by Learning Heterogeneous Delays of a Spiking Neural Network&lt;/a&gt;&lt;/strong&gt;&lt;/h2&gt;
&lt;img src="https://laurentperrinet.github.io/talk/2023-09-27-icann/qrcode.png" alt="qrcode" width="45%"/&gt;
&lt;p&gt;&lt;sup&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-09-27-icann" target="_blank" rel="noopener"&gt;https://laurentperrinet.github.io/talk/2023-09-27-icann&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&amp;hellip; by scanning this qrcode!
&lt;/aside&gt;</description></item><item><title>2023-09-08_fresnel.md</title><link>https://laurentperrinet.github.io/slides/2023-09-08_fresnel/</link><pubDate>Fri, 08 Sep 2023 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2023-09-08_fresnel/</guid><description>&lt;section&gt;
&lt;h1 id="event-based-vision"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-09-08_fresnel/?transition=fade" target="_blank" rel="noopener"&gt;Event-based vision&lt;/a&gt;&lt;/h1&gt;
&lt;h4 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-séminaire-institut-fresnel"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-09-08-fresnel" target="_blank" rel="noopener"&gt;[2023-09-08]&lt;/a&gt; &lt;a href="https://www.fresnel.fr/spip/spip.php?article2453&amp;amp;lang=fr" target="_blank" rel="noopener"&gt;Séminaire institut Fresnel&lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/talk/2023-09-08_fresnel/qrcode.png" alt="qrcode" height="130"/&gt; --&gt;
&lt;p&gt;&lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-09-08-fresnel/" target="_blank" rel="noopener"&gt;https://laurentperrinet.github.io/talk/2023-09-08-fresnel/&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;em&gt;Hello&lt;/em&gt;, I&amp;rsquo;m Laurent Perrinet from the Institut des Neurosciences de la Timone, a joint AMU / CNRS unit, and during this seminar at the Institut Fresnel, I&amp;rsquo;ll be presenting &lt;em&gt;event-driven cameras&lt;/em&gt;, a new technology in the field of imaging, and the impact of this technology on our understanding of vision. I&amp;rsquo;d like to &lt;em&gt;thank&lt;/em&gt; Loic le Goff for his kind invitation, and all of you for coming. These slides are available from my website, along with a number of references. The &lt;em&gt;outline&lt;/em&gt; of the talk is as follows: first, we&amp;rsquo;ll describe what an event-driven camera is - in particular, by comparing it to a conventional camera; then, we&amp;rsquo;ll show some examples of applications of these cameras with dedicated algorithms; and finally, we&amp;rsquo;ll present how our knowledge of biological mechanisms in neuroscience can enable us to improve these algorithms.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="sensing-light"&gt;Sensing light&lt;/h1&gt;
&lt;aside class="notes"&gt;
First of all, the general aim of &lt;em&gt;imaging&lt;/em&gt; is to represent a light signal, i.e. a luminous intensity, a color, distributed over the visual field, giving us a vivid impression of the visual scene before our eyes.
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="http://lepassetempsderose.l.e.pic.centerblog.net/fddea7fb.gif"
&gt;
&lt;aside class="notes"&gt;
This is perfectly illustrated by this &lt;em&gt;galloping horse&lt;/em&gt;: we get a &lt;em&gt;vivid&lt;/em&gt; impression of movement thanks to a rapid sequence of still images consistent with the scene being represented. This technique clearly exploits a visual &lt;em&gt;illusion&lt;/em&gt;, because we know that at each point in visual space, the light signal is a continuous analog signal representing the energy of the incoming photons.
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://upload.wikimedia.org/wikipedia/commons/0/07/The_Horse_in_Motion-anim.gif"
&gt;
&lt;aside class="notes"&gt;
This technique was inspired by the research carried out by Etienne-Jules &lt;em&gt;Marey&lt;/em&gt; (&lt;a href="https://en.wikipedia.org/wiki/Etienne-Jules_Marey%29" target="_blank" rel="noopener"&gt;https://en.wikipedia.org/wiki/Etienne-Jules_Marey)&lt;/a&gt;, who gave his name to the ISM, under the term &lt;em&gt;chronophotography&lt;/em&gt;, which notably enabled Muybridge to later demonstrate the mechanism of a horse&amp;rsquo;s gallop. Marey literally used a camera mounted on a &lt;em&gt;gun&lt;/em&gt;-like structure to shoot a visual scene.
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://media.giphy.com/media/4Y8PqJGFJ21CE/giphy.gif"
&gt;
&lt;aside class="notes"&gt;
The use of such dynamic &lt;em&gt;visualization&lt;/em&gt; is crucial in the scientific field, whether in biology or physics, as it enables us to quantify the characteristics of the experiment being carried out - I&amp;rsquo;m thinking, for example, of quantifying the movements and number of bacteria in a biological assay.
&lt;/aside&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-image="https://hackaday.com/wp-content/uploads/2018/04/saccades.gif"
&gt;
&lt;aside class="notes"&gt;
In the laboratory, we use it in particular to quantify &lt;em&gt;eye movements&lt;/em&gt; when a stimulus is presented to an observer.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="representing-spatio-temporal-luminous-information"&gt;Representing spatio-temporal luminous information&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/analog_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;To better understand the mechanism behind this technology, let&amp;rsquo;s imagine that we represent a &lt;em&gt;single pixel&lt;/em&gt; in the space of the visual field. Here, I&amp;rsquo;ve taken a grayscale &lt;em&gt;video&lt;/em&gt; from an episode from the Monty Python Flying Circus TV series. In this way, we can represent the evolution of the &lt;em&gt;log intensity&lt;/em&gt; of the light signal as a function of time.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://4.bp.blogspot.com/-AHprBxkfu5o/UJ-lqR7GsmI/AAAAAAAAHpo/VJzY7HMuXe0/s1600/The&amp;#43;Horse&amp;#43;in&amp;#43;Motion,&amp;#43;1878.%C2%A0Eadweard&amp;#43;Muybridge&amp;#43;%28b.&amp;#43;9&amp;#43;April,&amp;#43;1830%29The&amp;#43;first&amp;#43;movie&amp;#43;ever&amp;#43;made,&amp;#43;from&amp;#43;still&amp;#43;photographs..gif" target="_blank" rel="noopener"&gt;http://4.bp.blogspot.com/-AHprBxkfu5o/UJ-lqR7GsmI/AAAAAAAAHpo/VJzY7HMuXe0/s1600/The+Horse+in+Motion,+1878.%C2%A0Eadweard+Muybridge+(b.+9+April,+1830)The+first+movie+ever+made,+from+still+photographs..gif&lt;/a&gt;
&lt;a href="https://upload.wikimedia.org/wikipedia/commons/0/07/The_Horse_in_Motion-anim.gif" target="_blank" rel="noopener"&gt;https://upload.wikimedia.org/wikipedia/commons/0/07/The_Horse_in_Motion-anim.gif&lt;/a&gt;
&lt;a href="https://hackaday.com/wp-content/uploads/2018/04/saccades.gif?w=600&amp;amp;h=600" target="_blank" rel="noopener"&gt;https://hackaday.com/wp-content/uploads/2018/04/saccades.gif?w=600&amp;h=600&lt;/a&gt;
&lt;a href="http://38.media.tumblr.com/831aada3328557146e214efe1cb867a5/tumblr_mslrotKPS01snyrdto1_500.gif" target="_blank" rel="noopener"&gt;http://38.media.tumblr.com/831aada3328557146e214efe1cb867a5/tumblr_mslrotKPS01snyrdto1_500.gif&lt;/a&gt;
&lt;a href="https://www.filmsranked.com/wp-content/uploads/2020/05/two-fencers.gif%22" target="_blank" rel="noopener"&gt;https://www.filmsranked.com/wp-content/uploads/2020/05/two-fencers.gif"&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="frame-based-camera-temporal-discretization"&gt;Frame-Based Camera: Temporal discretization&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/frame-based_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
From this representation, expressed in continuous time, we can &lt;em&gt;discretize&lt;/em&gt; time and measure the log intensity at regular intervals. The interval between two successive images gives the &lt;em&gt;temporal resolution&lt;/em&gt;, and its inverse gives the number of images per second. This is the representation classically used in chronophotography, but also in all conventional video stream &lt;em&gt;acquisition and viewing&lt;/em&gt; technologies.
This technology is highly efficient for a wide range of signals. However, it does have certain &lt;em&gt;limitations&lt;/em&gt;.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="frame-based-camera-aliasing"&gt;Frame-Based Camera: Aliasing&lt;/h2&gt;
&lt;figure id="figure-gregor-lenz-2020httpslenzgregorcompostsevent-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://lenzgregor.com/posts/event-cameras/post-rethinking/frames.gif" alt="[[Gregor Lenz, 2020](https://lenzgregor.com/posts/event-cameras/)]" loading="lazy" data-zoomable width="85%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://lenzgregor.com/posts/event-cameras/" target="_blank" rel="noopener"&gt;Gregor Lenz, 2020&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Let&amp;rsquo;s take the &lt;em&gt;example&lt;/em&gt; of three colored cubes rotating along a circle in a frontal plane. Because of the finite temporal resolution and the time the shutter remains open, the images captured at each instant can contain a certain amount of &lt;em&gt;blur&lt;/em&gt;, and the movement becomes increasingly difficult to estimate. If the movement of the cubes accelerates, temporal &lt;em&gt;aliasing&lt;/em&gt; can be observed.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="frame-based-camera-wagon-wheel-illusion"&gt;Frame-Based Camera: Wagon-Wheel Illusion&lt;/h2&gt;
&lt;figure id="figure-sam-brinson-2020httpswwwsambrinsoncomnature-of-perception"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://vignette.wikia.nocookie.net/revengeristsconsortium/images/2/25/Whee.gif/revision/latest/scale-to-width-down/340?cb=20141209071330" alt="[[Sam Brinson, 2020](https://www.sambrinson.com/nature-of-perception/)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://www.sambrinson.com/nature-of-perception/" target="_blank" rel="noopener"&gt;Sam Brinson, 2020&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
This phenomenon is particularly striking when we look at a &lt;em&gt;wheel&lt;/em&gt; spinning at high speed, when the rotational speed is such that two successive images give the illusion that the movement is in the opposite direction to the real, physical motion. It is visible here on this car wheel: the central hub appears motionless, while the wheel is perceived as turning in the &lt;em&gt;opposite direction&lt;/em&gt; to the physical rolling motion on the road.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="event-based-camera"&gt;Event-Based Camera&lt;/h1&gt;
&lt;aside class="notes"&gt;
Now let&amp;rsquo;s introduce the &lt;em&gt;event camera&lt;/em&gt;.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-1"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
This consists of a conventional sensor which, like most CMOS-type sensors, transforms light energy into an electric current. However, there are two fundamental differences, inspired by our knowledge of the retina, the sensor of vision. Firstly, each pixel of this sensor is &lt;em&gt;independent&lt;/em&gt; and is not driven by a global clock. Secondly, each pixel follows the evolution of the log intensity and signals an event whenever an increment or decrement exceeds a threshold. Let&amp;rsquo;s explain this mechanism in relation to our analog signal.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-2"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_1.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
First of all, the signal evolves over time, and we can see here that it may cross a &lt;em&gt;threshold&lt;/em&gt;. An event is then produced by this pixel. Here, the &lt;em&gt;event&lt;/em&gt; is of negative polarity, as it corresponds to a decrement.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-3"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_2.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-4"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_5.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Then, the signal continues its course in time and crosses a threshold again, possibly several times, and a new event is produced at each crossing. Here, we&amp;rsquo;re also seeing increments, i.e. events of positive polarity.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-5"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_10.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-6"&gt;Event-Based Camera&lt;/h2&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_20.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
And so on: this simple mechanism produces, for each pixel, a &lt;em&gt;stream&lt;/em&gt; of events, a &lt;em&gt;list&lt;/em&gt; made up of the times of occurrence and the corresponding polarities.
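A minimal sketch of this per-pixel mechanism (an idealized model, not a specific sensor): emit a +1 or -1 event whenever the log intensity moves one threshold step away from the last reference level.
&lt;pre&gt;&lt;code class="language-python"&gt;# Idealized event generation for a single pixel.
import numpy as np

def to_events(log_intensity, threshold=0.1):
    """Return a list of (time_index, polarity) events for one pixel."""
    events, reference = [], log_intensity[0]
    for t, value in enumerate(log_intensity):
        while value - reference &gt;= threshold:   # brightness increased by one step
            reference += threshold
            events.append((t, +1))
        while reference - value &gt;= threshold:   # brightness decreased by one step
            reference -= threshold
            events.append((t, -1))
    return events

signal = np.log(1.0 + np.abs(np.sin(np.linspace(0.0, 6.0, 400))))
print(to_events(signal)[:5])                     # first few (time, polarity) events
&lt;/code&gt;&lt;/pre&gt;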
&lt;/aside&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-7"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw_-1.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-8"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_raw.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Let&amp;rsquo;s show it now applied to the whole analog signal.
It&amp;rsquo;s worth noting that, compared with frame-by-frame representations, this one is particularly &lt;em&gt;sparse&lt;/em&gt;: a signal with very few changes can be represented by just a few events. This is a very useful feature, not only because it saves &lt;em&gt;bandwidth&lt;/em&gt;, but also because it allows us to concentrate the &lt;em&gt;computations&lt;/em&gt; on the few events that represent the image. It&amp;rsquo;s also a fundamental feature of how neurons operate in the brain, and we&amp;rsquo;ll come back to it later.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-9"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;!--
&lt;figure id="figure-gregor-lenz-2020httpslenzgregorcompostsevent-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://lenzgregor.com/posts/event-cameras/post-rethinking/events.gif" alt="[[Gregor Lenz, 2020](https://lenzgregor.com/posts/event-cameras/)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://lenzgregor.com/posts/event-cameras/" target="_blank" rel="noopener"&gt;Gregor Lenz, 2020&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Finally, we obtain a list of events for each pixel, which can be &lt;em&gt;merged&lt;/em&gt; across the image as a whole, forming a list of events including pixel addresses, times of occurrence and polarities. As they are generated over time, they are naturally arranged in order of occurrence. All these events are then transmitted in &lt;em&gt;real time&lt;/em&gt; to the output bus, typically by means of a USB3 connection. Note the analogy between this representation and the one made in the optic nerve that connects our retina to the rest of the brain: indeed, the million ganglion cells that make up the retina&amp;rsquo;s output emit action potentials, which are the only source of information that leaves the retina via the &lt;em&gt;optic nerve&lt;/em&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.researchgate.net/profile/Guido-Croon/publication/313221316/figure/fig2/AS:668997448134663@1536512829861/Picture-of-the-event-based-camera-employed-in-this-work-the-DVS_W640.jpg" target="_blank" rel="noopener"&gt;https://www.researchgate.net/profile/Guido-Croon/publication/313221316/figure/fig2/AS:668997448134663@1536512829861/Picture-of-the-event-based-camera-employed-in-this-work-the-DVS_W640.jpg&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-10"&gt;Event-Based Camera&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Sensor&lt;/th&gt;
&lt;th&gt;Range&lt;/th&gt;
&lt;th&gt;Framerate&lt;/th&gt;
&lt;th&gt;Resolution&lt;/th&gt;
&lt;th&gt;Power&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Human eye&lt;/td&gt;
&lt;td&gt;60 (?) dB&lt;/td&gt;
&lt;td&gt;300 (?) fps&lt;/td&gt;
&lt;td&gt;100 (?) Mpx&lt;/td&gt;
&lt;td&gt;10 mW&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DSLR&lt;/td&gt;
&lt;td&gt;44.6 dB&lt;/td&gt;
&lt;td&gt;120 fps&lt;/td&gt;
&lt;td&gt;2&amp;ndash;20 Mpx&lt;/td&gt;
&lt;td&gt;30 W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ultra-high speed&lt;/td&gt;
&lt;td&gt;64 dB&lt;/td&gt;
&lt;td&gt;10^4 fps&lt;/td&gt;
&lt;td&gt;0.3&amp;ndash;4 Mpx&lt;/td&gt;
&lt;td&gt;300 W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Event-based&lt;/td&gt;
&lt;td&gt;120 dB&lt;/td&gt;
&lt;td&gt;10^6 fps&lt;/td&gt;
&lt;td&gt;0.1&amp;ndash;2 Mpx&lt;/td&gt;
&lt;td&gt;30 mW&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;There are several properties of event-driven cameras that make them remarkable. First of all, the &lt;em&gt;temporal precision&lt;/em&gt; of events is of the order of microseconds, enabling a theoretical frame rate of the order of a million images per second. This can be compared with a conventional camera, which is of the order of a hundred images per second, or with a high-speed camera, which can reach 10,000 images per second. It is difficult to estimate the sampling frequency of human perception: while 25 frames per second is often sufficient for movie viewing, it has been shown that the human eye can distinguish temporal details up to 300 or even 1,000 frames per second. It is worth noting that the &lt;em&gt;spatial resolution&lt;/em&gt; of these event cameras is often relatively modest, of the order of a megapixel; this is not an intrinsic technical limitation, but rather reflects the applications in which these cameras are commonly used. Compared with conventional cameras, which consume several watts, event cameras consume very little electrical &lt;em&gt;energy&lt;/em&gt;, of the order of 10 milliwatts, a consumption equivalent to that of the human eye. Another important feature of these cameras is their ability to detect a very wide &lt;em&gt;range&lt;/em&gt; of luminosity, far exceeding that of conventional cameras, at 120 dB (a factor of a million, compared with the human eye&amp;rsquo;s factor of a thousand between full moon and full sunlight).&lt;/p&gt;
&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Event_camera#Functional_description" target="_blank" rel="noopener"&gt;https://en.wikipedia.org/wiki/Event_camera#Functional_description&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;more in &lt;a href="https://arxiv.org/pdf/1904.08405.pdf" target="_blank" rel="noopener"&gt;https://arxiv.org/pdf/1904.08405.pdf&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-11"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
This ability to &lt;em&gt;adapt&lt;/em&gt; to changing light conditions can be illustrated by going back to our analog signal and its event representation, and imagining a sudden change in lighting. A typical example would be an autonomous car driving in daylight and entering or leaving a &lt;em&gt;tunnel&lt;/em&gt;, which involves changes in brightness by a factor of several thousand.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="event-based-camera-12"&gt;Event-Based Camera&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/figures/raw/main/event-based/event-based_signal_low.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Here the signal is divided by a factor of 8 in the middle section. A frame-based camera would report this as a large drop in measured intensity. In an event-based camera, the change appears as a &lt;em&gt;sharp decrement&lt;/em&gt; in log-intensity space, clearly signalled by events of negative polarity; but since the camera works in log intensity, dividing the light signal leaves the &lt;em&gt;same signal&lt;/em&gt; course over time, and therefore produces identical events afterwards. Event-driven cameras are therefore particularly well-suited to &lt;em&gt;dynamic signals&lt;/em&gt;, where the lighting context can change drastically.
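&lt;p&gt;A toy event-generation rule (an assumption used only for illustration, not the exact pixel circuit) makes this invariance explicit: thresholding changes of log intensity means that a global division of the signal only produces a transient of negative events, after which the event pattern is unchanged.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# Toy model: emit an event whenever log intensity has moved by more than a
# fixed contrast threshold since the last event. Scaling the whole signal
# shifts log(I) by a constant, so subsequent events are identical.
def events_from_intensity(intensity, threshold=0.2):
    ref = np.log(intensity[0])
    events = []
    for t, i in enumerate(intensity):
        delta = np.log(i) - ref
        if abs(delta) &gt; threshold:
            events.append((t, +1 if delta &gt; 0 else -1))
            ref = np.log(i)
    return events

signal = np.ones(100)
signal[40:] = signal[40:] / 8.0            # sudden division by 8, as in the slide
print(events_from_intensity(signal))       # a single OFF event at t = 40, then silence
&lt;/code&gt;&lt;/pre&gt;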
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="event-based-computer-vision"&gt;Event-Based Computer vision&lt;/h1&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;These cameras therefore look very promising for future applications, particularly for embedded applications, but also for applications linked to scientific experiments. However, we can see that the image &lt;em&gt;representation&lt;/em&gt; is completely different, that is, we can no longer consider static images that follow one another at a regular rate, and for which we could have applied the algorithms that have been developed for decades in the field of &lt;em&gt;computer vision&lt;/em&gt;. We end up with a signal that corresponds to events that are transmitted as a stream from the camera. And we have to reinvent all computer vision algorithms to make them &lt;em&gt;event-driven&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Note also that the process is active, driven by the signal itself, rather than passively acquired at a fixed rate.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="always-on-object-recognition"&gt;Always-on Object Recognition&lt;/h2&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/hots.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The first algorithm we developed with Antoine Grimaldi, a PhD student, in collaboration with Sio Ieng and Ryad Benosman of Sorbonne University, who are recognized researchers in the development of this type of camera, is an improvement on an existing algorithm, &lt;em&gt;HOTS&lt;/em&gt;. This algorithm uses a fairly classical convolutional, hierarchical architecture which passes information &amp;ldquo;forward&amp;rdquo; from the camera and its event representation through different processing layers, converging on a high-level representation that can be used for classification, in this case to recognize the identity of the digit presented as input, here the digit eight. A fundamental feature of this algorithm is that it transforms the event representation into multiplexed, parallel channels which analogically represent the temporal pattern of events, the &amp;ldquo;&lt;em&gt;temporal surface&lt;/em&gt;&amp;rdquo;; these are shown for the individual layers in the figure. An interesting feature of this algorithm is that learning in each layer is &lt;em&gt;unsupervised&lt;/em&gt;, a significant difference from conventional deep learning algorithms, which assume that a classification error signal can be back-propagated along the entire hierarchy, something that is notoriously implausible in biology. Starting from this algorithm, we improved it by including neuro-biological knowledge, in particular &lt;em&gt;homeostasis&lt;/em&gt; rules that balance the different parallel communication pathways.
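&lt;p&gt;For intuition, here is a minimal sketch of a time surface in the spirit of HOTS (simplified, with illustrative parameter names; this is not the published implementation): around each incoming event, the times of the most recent events at neighbouring pixels are passed through an exponential decay, yielding an analog context vector that downstream layers can cluster without supervision.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# Simplified time-surface sketch (not the published HOTS code): keep, for each
# pixel, the time of its last event; on a new event, read out the neighbourhood
# and apply an exponential decay so that recent activity dominates.
def time_surface(last_time, x, y, t, radius=2, tau=50e-3):
    # border handling omitted for brevity
    patch = last_time[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return np.exp(-(t - patch) / tau)

H, W = 128, 128
last_time = np.full((H, W), -np.inf)       # no event seen yet: surface is 0 there
# for each event (t, x, y, p): surf = time_surface(last_time, x, y, t)
#                              last_time[y, x] = t
&lt;/code&gt;&lt;/pre&gt;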
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="always-on-object-gesture-recognition"&gt;Always-on Object Gesture Recognition&lt;/h2&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/gesture_offline.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;To illustrate the results of our algorithm, we applied it to a classic dataset for this type of camera, involving the classification of 10 different types of human &lt;em&gt;gestures&lt;/em&gt;. These biological movements include, for example, clapping hands, waving hello or playing the drums. The chance level is therefore 10%, and we observed that, once all events have been processed, the &lt;em&gt;original&lt;/em&gt; algorithm achieves a performance of around 70%. By adding &lt;em&gt;homeostasis&lt;/em&gt;, we reached a higher level of 82%, demonstrating the usefulness of neuroscientific knowledge for improving machine learning algorithms.&lt;/p&gt;
&lt;p&gt;We also built on a fundamental characteristic of biological systems. This kind of algorithm is classically used to process the flow of events, but classification is only performed at the very end, once all the events have been processed. We modified the algorithm so that this classification can be done &lt;em&gt;online&lt;/em&gt;, in real time, event by event. In this way, processing in the various layers is triggered by the arrival of each event, which is propagated from the camera through all the layers up to the classification layer.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="always-on-object-gesture-recognition-1"&gt;Always-on Object Gesture Recognition&lt;/h2&gt;
&lt;figure id="figure-grimaldi-boutin-sio-ieng-benosman--lp-2023httpslaurentperrinetgithubiopublicationgrimaldi-24"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-24/gesture_online.png" alt="[[Grimaldi, Boutin, Sio-Ieng, Benosman &amp; LP, 2023](https://laurentperrinet.github.io/publication/grimaldi-24/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-24/" target="_blank" rel="noopener"&gt;Grimaldi, Boutin, Sio-Ieng, Benosman &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
What&amp;rsquo;s more interesting is that we were also able to show the &lt;em&gt;evolution&lt;/em&gt; of the average performance on the dataset as a function of the number of events processed by the algorithm. The blue curve shows that below 10 events we remain at chance level; performance then increases gradually, reaching the level of the original algorithm at around ten thousand events and exceeding this &lt;em&gt;performance&lt;/em&gt; when ten times more events are available. A major advantage of this algorithm is that it can be asked to classify what the event camera sees not only once the entire signal has been processed, but at any time. This characteristic is essential in biology. For example, imagine you&amp;rsquo;re on the savannah and a &lt;em&gt;lion&lt;/em&gt; jumps out at you: you cannot afford to wait for the whole video sequence to be processed before making the right decision, which is to flee. Another variant of our algorithm consists in selecting the output classification events based on a precision computed for each event. By using a &lt;em&gt;threshold&lt;/em&gt; on this precision, we can achieve a very good level of performance with just a few hundred events, and thus reproduce a characteristic common in biological networks: the decision is not taken gradually, but emerges abruptly (here after about 200 events) and then improves and stabilizes.
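&lt;p&gt;A minimal sketch of this gating idea (my own naming, not the exact criterion used in the paper): the classifier updates its class probabilities at every event, but only commits to a decision once its confidence exceeds a threshold.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# Sketch: event-by-event classification with a confidence gate. `probas` is a
# stream of per-event class probability vectors (however they were computed);
# a decision is only emitted once the top probability crosses the threshold.
def first_confident_decision(probas, threshold=0.9):
    for n, p in enumerate(probas, start=1):
        if np.max(p) &gt; threshold:
            return n, int(np.argmax(p))    # number of events used, chosen class
    return None, None                      # never confident enough
&lt;/code&gt;&lt;/pre&gt;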
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks"&gt;Spiking Neural Networks&lt;/h1&gt;
&lt;aside class="notes"&gt;
We have therefore illustrated the use of &lt;em&gt;event-driven&lt;/em&gt; cameras on a particular algorithm. This algorithm has the particularity of processing the flow of events coming from the camera event by event, so that potentially each of these events triggers a cascade of mechanisms in the different processing layers, and thus enables a classification value to be updated at any given moment. This type of operation is characteristic of the way neurons work in the brain, i.e. using an event-based representation of information processing. This is what we call &lt;em&gt;spiking neural networks&lt;/em&gt;.
&lt;/aside&gt;
&lt;hr&gt;
&lt;figure id="figure-tonic-manualhttpstonicreadthedocsioenlatest_imagesneuron-modelspng"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://tonic.readthedocs.io/en/latest/_images/neuron-models.png" alt="[[Tonic manual](https://tonic.readthedocs.io/en/latest/_images/neuron-models.png)]" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://tonic.readthedocs.io/en/latest/_images/neuron-models.png" target="_blank" rel="noopener"&gt;Tonic manual&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
Indeed, most neural networks used in deep learning rely on an analog representation. This is illustrated in this figure, which shows the various analog inputs to a formal neuron being linearly integrated by the synapses, then transformed by a non-linear function to generate an activation which is itself analog. This basic &lt;em&gt;perceptron&lt;/em&gt; principle is at the foundation of all existing neural networks and in particular enables the construction of convolutional networks, currently the champions of image classification, having exceeded human performance for several years now. However, while this is true for static images, it can become prohibitively expensive with videos. This is why it can be interesting to use &lt;em&gt;spiking&lt;/em&gt; neurons instead, which, instead of receiving an analog input, receive events that trigger cascades of mechanisms in the neuronal cell, notably reflected in the cell&amp;rsquo;s membrane potential. Typically, such a cell includes a threshold for triggering an action potential, which generates new output events on the cell&amp;rsquo;s axon.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-lif-neuron"&gt;Spiking Neural Networks: LIF Neuron&lt;/h2&gt;
&lt;figure id="figure-grimaldi-et-al-2023-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="[Grimaldi *et al*, 2023, [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023, &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This is illustrated in this &lt;em&gt;animation&lt;/em&gt;, which shows how we can transform a list of input events by giving them different weights, and then &lt;em&gt;integrate&lt;/em&gt; them into the cell&amp;rsquo;s soma to generate output events.&lt;/p&gt;
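&lt;p&gt;A minimal leaky integrate-and-fire sketch (illustrative parameters, not the exact model of the animation) makes the mechanism concrete: weighted input spikes push the membrane potential up, the potential leaks back towards rest, and an output spike is emitted whenever it crosses the threshold.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# Minimal LIF neuron (discrete time). input_spikes has shape (n_inputs, n_steps)
# with entries 0 or 1; weights has shape (n_inputs,). Parameters are illustrative.
def lif(input_spikes, weights, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    v, out_times = v_reset, []
    for step in range(input_spikes.shape[1]):
        drive = float(weights @ input_spikes[:, step])  # synaptic integration
        v = v + dt / tau * (v_reset - v) + drive        # leak towards rest + input
        if v &gt; v_thresh:
            out_times.append(step * dt)                 # emit an output spike
            v = v_reset
    return out_times
&lt;/code&gt;&lt;/pre&gt;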
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-neuromorphic-hardware"&gt;Spiking Neural Networks: neuromorphic hardware&lt;/h2&gt;
&lt;figure id="figure-loihi-2"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://cdn.cnx-software.com/wp-content/uploads/2022/09/Intel-Loihi-2.jpg" alt="Loihi 2" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Loihi 2
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;This new type of representation amounts to a &lt;em&gt;paradigm shift&lt;/em&gt; in computation, in the same way that event-driven cameras have brought a paradigm shift in image representation. The development of these new algorithms based on spiking neural networks goes hand in hand with the development of new neuromorphic chips, such as the Loihi 2 chip developed by Intel, which replaces a central computing unit with a massively parallelized &lt;em&gt;array&lt;/em&gt; of elementary event-driven computing units. As with event-driven cameras, this has the dual advantage of being very fast and consuming very little energy. Other types of &lt;em&gt;neuromorphic chips&lt;/em&gt; are currently being developed and may soon be used instead of conventional CPUs or GPUs.&lt;/p&gt;
&lt;figure id="figure-propheseehttpsdocspropheseeaistableconceptshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://d1fmx1rbmqrxrr.cloudfront.net/zdnet/optim/i/edit/ne/2019/Pierre%20temp/Intel%20Loihi__w630.jpg" alt="[Prophesee](https://docs.prophesee.ai/stable/concepts.html)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://docs.prophesee.ai/stable/concepts.html" target="_blank" rel="noopener"&gt;Prophesee&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;p&gt;Loihi: &lt;a href="https://d1fmx1rbmqrxrr.cloudfront.net/zdnet/optim/i/edit/ne/2019/Pierre%20temp/Intel%20Loihi__w630.jpg" target="_blank" rel="noopener"&gt;https://d1fmx1rbmqrxrr.cloudfront.net/zdnet/optim/i/edit/ne/2019/Pierre%20temp/Intel%20Loihi__w630.jpg&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://cdn.cnx-software.com/wp-content/uploads/2022/09/Intel-Loihi-2.jpg?lossy=0&amp;amp;strip=none&amp;amp;ssl=1" target="_blank" rel="noopener"&gt;https://cdn.cnx-software.com/wp-content/uploads/2022/09/Intel-Loihi-2.jpg?lossy=0&amp;strip=none&amp;ssl=1&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.sstatic.net/ixnrz.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Spiking neural networks therefore seem very promising for processing the output of event-driven cameras, but the study of &lt;em&gt;neurophysiology&lt;/em&gt; shows us that their operation can sometimes seem incongruous and far removed from the perceptron. In this first example, taken from a 1995 article by Mainen and Sejnowski, panel A shows the response of the same neuron to several &lt;em&gt;repetitions&lt;/em&gt; of a stimulation. At the top, we see the membrane potential of this neuron in response to a 200 picoampere &lt;em&gt;current step&lt;/em&gt;, showing that the membrane potential is not reproducible across trials. This is further illustrated by the spike responses over time for the different trials, which are strongly aligned at the start of stimulation but gradually disperse, so that after around 750 milliseconds there is no longer any coherence between trials. The situation is different in panel B, where the neuron is stimulated with &lt;em&gt;noise&lt;/em&gt;. In this case, the responses are so precise across trials that the membrane potential traces overlap almost exactly. The subtlety of this paper lies in its use of a &lt;em&gt;frozen&lt;/em&gt; noise, i.e. one that is repeated unchanged across trials. In this way, it demonstrates that neurons are not so much sensitive to analog values presented in the form of square pulses, but rather to dynamic signals, to which they respond with very high temporal precision.&lt;/p&gt;
&lt;/aside&gt;
&lt;!--
---
## Spiking Neural Networks in neurobiology
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproduucibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt; --&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-1"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-diesmann-et-al-1999httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_3_diesmann_et_al_1999py"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Diesmann_et_al_1999.png" alt="[[Diesmann et al. 1999](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py" target="_blank" rel="noopener"&gt;Diesmann et al. 1999&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this other example, I show a simulation that reproduces the 1999 paper by Diesmann and colleagues. This &lt;em&gt;theoretical model&lt;/em&gt; considers ten groups of 100 neurons connected from group to group. An interesting property of this system is that, for the same stimulation, i.e. the same number of spikes, information propagates from group to group only if it is sufficiently &lt;em&gt;concentrated in time&lt;/em&gt;. In the first cases shown, the initial volley is too dispersed in the first group and becomes progressively more dispersed in subsequent groups, so the activity dies out. Above a certain threshold of synchrony, the packet formed by a group of relatively synchronous spikes is correctly transmitted through the successive groups of the network. This &lt;em&gt;non-linear&lt;/em&gt; behavior is one of the characteristics of spiking networks, giving them a certain richness, but also a certain complexity.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-2"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-haimerl-et-al-2019httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" alt="[[Haimerl et al, 2019](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Haimerl et al, 2019&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
A third example shows an experiment conducted by Rosa Cossart&amp;rsquo;s group at INMED and recently published by Haimerl and colleagues, reporting &lt;em&gt;calcium fluorescence&lt;/em&gt; imaging recordings in mice. By arranging the different neurons in &lt;em&gt;temporal order of activation&lt;/em&gt;, a sequential activation of these neurons appears, a mechanism reminiscent of the model mentioned earlier. These activation groups are strongly correlated with the &lt;em&gt;motor behavior&lt;/em&gt; of the mouse, as described in the graph at the top. Of particular interest is the fact that these sequences of activity are stable over time and can still be recorded on a &lt;em&gt;subsequent day&lt;/em&gt;. This illustrates the importance of dynamics in neural computations.
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks-spiking-motifs"&gt;Spiking Neural Networks: Spiking motifs&lt;/h1&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;These observations have led us to &lt;em&gt;review&lt;/em&gt; neurobiological evidence around the existence of a neural representation that would use the relative time of spikes as a means of representing information. In particular, it is possible to use the conduction &lt;em&gt;delays&lt;/em&gt; that exist in the transmission of spikes from one neuron to another. It may seem paradoxical, but these delays are not simply a constraint, but can help to improve our ability to represent information by way of &lt;em&gt;spiking motifs&lt;/em&gt;.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-1"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-grimaldi-et-al-2023-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="[Grimaldi *et al*, 2023, [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023, &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If we consider, for example, this ultra-simplified network consisting of three presynaptic neurons and two output neurons connected by &lt;em&gt;heterogeneous&lt;/em&gt; delays, we see that a &lt;em&gt;synchronous&lt;/em&gt; input generates membrane activity in the two output neurons at different times, so the threshold is never reached and these neurons do not produce an output spike. On the other hand, if the delays are such that the action potentials converge on the neuron at the same instant, then these contributions sum at the &lt;em&gt;same instant&lt;/em&gt; and produce an output spike, as denoted here by the red bar.&lt;/p&gt;
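&lt;p&gt;The following toy computation (delay values are assumptions chosen for the example) captures the argument: a spike reaches the soma at its emission time plus the conduction delay, and the neuron only fires when enough arrivals fall within a short coincidence window.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;# Toy coincidence detector with heterogeneous delays (illustrative values).
def fires(spike_times, delays, window=1e-3, n_needed=3):
    arrivals = sorted(t + d for t, d in zip(spike_times, delays))
    for i in range(len(arrivals) - n_needed + 1):
        if window &gt; arrivals[i + n_needed - 1] - arrivals[i]:
            return True                     # enough coincident arrivals: spike
    return False

delays = [3e-3, 2e-3, 1e-3]                 # heterogeneous conduction delays
print(fires([0.0, 0.0, 0.0], delays))       # synchronous input: arrivals spread out, False
print(fires([0.0, 1e-3, 2e-3], delays))     # motif matching the delays: arrivals coincide, True
&lt;/code&gt;&lt;/pre&gt;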
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-2"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To better understand this mechanism, let&amp;rsquo;s return to our animation of a spiking neuron. Action potentials arrive at the neuron and are &lt;em&gt;immediately&lt;/em&gt; transmitted to the neuron&amp;rsquo;s cell body to be integrated and potentially generate a spike.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-3"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/HSD.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When using &lt;em&gt;heterogeneous&lt;/em&gt; delays, the situation is different, as the information takes a different time to reach the neuron&amp;rsquo;s cell body depending on the synapse. Note that if the input contains a particular &lt;em&gt;spiking motif&lt;/em&gt;, highlighted here by green action potentials, then these contributions converge at the same instant thanks to the delays, and the neuron signals the detection by emitting a new spike.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
We used this theoretical principle in an algorithm for detecting motion in an image sequence. To do this, we first generated event data using natural images set in motion along trajectories that resemble those produced by free exploration of a visual scene. You&amp;rsquo;ll notice several features of the event-driven output, such as the fact that faster motion generates more spikes, or that edges oriented parallel to the direction of motion produce few changes, and therefore little spike output - the so-called aperture problem.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-1"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://raw.githubusercontent.com/laurentperrinet/figures/7f382a8074552de1a6a0c5728c60d48788b5a9f8/animated_neurons/conv_HDSNN.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We then used a neural network with a classical architecture, which we enhanced with a spiking representation that takes into account different possible synaptic delays. In this figure, the input is shown in the left grid, which represents the occurrence of spikes of positive or negative polarity. Different processing channels, denoted by the colors green and orange, are applied to this input to produce membrane activity. As illustrated above, this activity produces output spikes through kernels of synaptic connections with heterogeneous delays, corresponding to the detection of precise spatio-temporal patterns.&lt;/p&gt;
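&lt;p&gt;One way to picture this (a sketch of the idea, not the code of the paper): if the event stream is binned into a dense tensor over polarity, time and space, a kernel indexed by synaptic delays is simply a convolution along the time axis, so the membrane activity at time t sums inputs arriving from t minus each delay.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch
import torch.nn.functional as F

# Sketch of kernels with heterogeneous delays as a 3D convolution (illustrative
# shapes, not the implementation of the paper). Events are binned into a dense
# tensor of shape (batch, polarity, time, height, width).
events = torch.zeros(1, 2, 128, 32, 32)
events[0, 0, 50, 16, 16] = 1.0                       # one ON event as a test

# 8 spatio-temporal motifs, each spanning 2 polarities, 12 delay taps, 5x5 pixels
kernels = torch.randn(8, 2, 12, 5, 5)
activity = F.conv3d(events, kernels, padding=(0, 2, 2))
# activity[0, k, t] is large when recent events match motif k ending at time t;
# thresholding this "membrane activity" would produce the output spikes.
print(activity.shape)                                # torch.Size([1, 8, 117, 32, 32])
&lt;/code&gt;&lt;/pre&gt;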
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-2"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;One advantage of this network is that it is differentiable, enabling us to apply classical machine learning methods, notably supervised learning. We then see different convolution kernels emerge, and here I show a subset of these kernels for different motion directions, as denoted by the red arrows on the left of the graph. The kernels are laid out spatially, with the different delays arranged from a delay of one time step on the right to 12 time steps on the left. Detectors that follow the motion emerge; for example, the top row follows a motion from top to bottom. These kernels integrate both positive-polarity inputs, in red, and negative-polarity inputs, in blue.
Such spatio-temporal filtering is observed in neurobiology but, to my knowledge, had never been observed in a model of spiking neurons trained under natural conditions.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-3"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy_raw.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We will now study the performance of this network in detecting motion in the flow of events entering the network. When we use all the weights of the convolution kernels, we get a very good performance of the order of 99%, represented by the black dot in the top right-hand corner. Note that in the kernels we have seen emerge, most of the synaptic weights are close to zero, so we might consider removing some of them, since this reduces the number of computations required per event.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-4"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy_shortening.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;strong&gt;2 MINUTE&lt;/strong&gt;
This is what we&amp;rsquo;ve done, by first removing the parts of the kernel corresponding to the longest delays. This &amp;ldquo;shortens&amp;rdquo; the kernel. We quickly observed a degradation in performance, which reached half-saturation when the number of weights was reduced by around 50%. This demonstrates the importance of integrating information that is relatively distant and structured over time.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-hd-snn-5"&gt;Spiking Neural Networks: HD-SNN&lt;/h2&gt;
&lt;figure id="figure-grimaldi--lp-2023-biol-cyberneticshttpslaurentperrinetgithubiopublicationgrimaldi-23-bc"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/quant_accuracy.svg" alt="[Grimaldi &amp; LP (2023) Biol Cybernetics](https://laurentperrinet.github.io/publication/grimaldi-23-bc/)" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener"&gt;Grimaldi &amp;amp; LP (2023) Biol Cybernetics&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In a second step, we performed a pruning operation, which consists in progressively removing the weakest weights. This time, performance remains optimal over a wide compression range, and half-saturation is only reached once around 99.8% of the weights have been removed. This means that the network maintains very good performance even when only one weight out of 600 has been kept, and therefore with the computational cost reduced by a factor of roughly 600. This property, which we did not expect, seems promising for creating machine learning algorithms that are less energy-hungry.&lt;/p&gt;
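&lt;p&gt;A minimal magnitude-pruning sketch (an illustration of the principle, not the exact procedure of the paper): keep only a given fraction of the weights with the largest magnitude and set the rest to zero.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# Magnitude-based pruning sketch: keep_fraction=1/600 keeps roughly one weight
# out of 600, i.e. removes about 99.8% of the synaptic weights.
def prune(weights, keep_fraction=1.0 / 600):
    cutoff = np.quantile(np.abs(weights), 1.0 - keep_fraction)
    return np.where(np.abs(weights) &gt; cutoff, weights, 0.0)
&lt;/code&gt;&lt;/pre&gt;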
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="event-based-vision-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-09-08_fresnel/?transition=fade" target="_blank" rel="noopener"&gt;Event-based vision&lt;/a&gt;&lt;/h1&gt;
&lt;h4 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-séminaire-institut-fresnel-1"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-09-08-fresnel" target="_blank" rel="noopener"&gt;[2023-09-08]&lt;/a&gt; &lt;a href="https://www.fresnel.fr/spip/spip.php?article2453&amp;amp;lang=fr" target="_blank" rel="noopener"&gt;Séminaire institut Fresnel&lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;!-- &lt;img src="https://laurentperrinet.github.io/talk/2023-09-08_fresnel/qrcode.png" alt="qrcode" height="130"/&gt; --&gt;
&lt;p&gt;&lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
In conclusion, we have seen that event-driven cameras open the door to new applications that mimic the performance of the human eye in terms of computational dynamics, adaptation to light conditions and energy constraints. This technological development has recently been accompanied by the development of neuromorphic chips and of innovative algorithms in the form of spiking neural networks. However, a great deal of progress remains to be made at the theoretical level, particularly in our understanding of these spiking neural networks, and we have shown the potential gains to be obtained by exploiting the richness of temporal representations, notably by taking advantage of heterogeneous delays.
Beyond these particular applications to natural image processing, I hope to have demonstrated the importance of cross-fertilizing engineering applications with biological neuroscience. This line of research, known as NeuroAI or, more generally, computational neuroscience, is likely to develop considerably over the next few years. Thank you for your attention.
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2023-05-10-phd-program_neurosciences-computationnelles.md</title><link>https://laurentperrinet.github.io/slides/2023-05-10-phd-program_neurosciences-computationnelles/</link><pubDate>Wed, 10 May 2023 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2023-05-10-phd-program_neurosciences-computationnelles/</guid><description>&lt;section&gt;
&lt;h1 id="interactions-between-machine-learning-artificial-neural-networks-and-our-understanding-of-biological-vision"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-05-10-phd-program_neurosciences-computationnelles/?transition=fade" target="_blank" rel="noopener"&gt;Interactions between machine learning, artificial neural networks and our understanding of biological vision&lt;/a&gt;&lt;/h1&gt;
&lt;h4 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-neuroschool-phd-program-in-neuroscience-computation-neuroscience"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-05-10-phd-program-neurosciences-computationnelles" target="_blank" rel="noopener"&gt;[2023-05-10]&lt;/a&gt; &lt;a href="https://neuro-marseille.org/en/training/phd-program/" target="_blank" rel="noopener"&gt;NeuroSchool PhD Program in Neuroscience&lt;/a&gt;: Computation Neuroscience&lt;/u&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;img src="https://laurentperrinet.github.io/talk/2023-05-10-phd-program-neurosciences-computationnelles/qrcode.png" alt="qrcode" height="130"/&gt;
&lt;p&gt;Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;!-- ![logo](https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg)
![QR code](https://laurentperrinet.github.io/talk/2023-05-10-phd-program-neurosciences-computationnelles/qrcode.png) --&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;welcome to the course on COMPUTATIONAL NEUROSCIENCE 2023 entitled &amp;ldquo;Machine learning to analyze complex data&amp;rdquo;&lt;/li&gt;
&lt;li&gt;objective= understand models of biological vision which are the inspiration for modern deep learning&lt;/li&gt;
&lt;li&gt;outcome= interaction between artificial and natural NNs&lt;/li&gt;
&lt;li&gt;outline= principles / CNNs / challenges / solutions&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="principles-of-vision"&gt;Principles of Vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;break down problem in three different levels: Marr (+ Poggio)&lt;/li&gt;
&lt;li&gt;arbitrary, but useful division of labor&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-ilya-repin-1884httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_001.jpg" alt="[An Unexpected Visitor (Ilya Repin, 1884)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Ilya Repin, 1884)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;seeing= interacting with the visual world&lt;/li&gt;
&lt;li&gt;social animals: looking at emotions&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-1"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_002.jpg" alt="[An Unexpected Visitor (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;active: the eye is always moving&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fr.wikipedia.org/wiki/Alfred_Iarbous" target="_blank" rel="noopener"&gt;https://fr.wikipedia.org/wiki/Alfred_Iarbous&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;1) examine the painting freely&amp;rdquo;&lt;/li&gt;
&lt;li&gt;consistency of eye traces / interindividual differences&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-2"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---age-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_004.jpg" alt="[An Unexpected Visitor - *Age?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;Age?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;active: depends on task:&lt;/li&gt;
&lt;li&gt;&amp;ldquo;3) assess the ages of the characters&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-3"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---how-long-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_007.jpg" alt="[An Unexpected Visitor - *How long?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;How long?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;6) surmise how long the “unexpected visitor” had been away&amp;rdquo;&lt;/li&gt;
&lt;li&gt;adaptive and efficient system&amp;hellip;&lt;/li&gt;
&lt;li&gt;yet, surprisingly&amp;hellip;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-rotating-snakes-akiyoshi-kitaokahttpwwwritsumeiacjpakitaokaindex-ehtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/42_rotsnakes_main.jpg" alt="[Rotating Snakes *Akiyoshi KITAOKA*](http://www.ritsumei.ac.jp/~akitaoka/index-e.html)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Rotating Snakes &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;the visual system experiences &amp;ldquo;hallucinations&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-1976-viking-orbiter-imagehttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Face-on-mars.jpg" alt="[Cydonia Mensae, 1976, *Viking Orbiter image*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae, 1976, &lt;em&gt;Viking Orbiter image&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;these hallucinations may appear to be&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;real&lt;/li&gt;
&lt;li&gt;persistent&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_low.png" alt="[Cydonia Mensae, 2007, *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae, 2007, &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
in that specific case&amp;hellip;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_high.png" alt="[Cydonia Mensae, 2007, *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae, 2007, &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;more data = less ambiguity&lt;/li&gt;
&lt;li&gt;beware: models may also hallucinate&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-context"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;: Context&lt;/h2&gt;
&lt;p&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Kitaoka.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Ilusions of brightness or lightness &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;these may be of low level&lt;/li&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-context-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;: Context&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion_without.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-context-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;: Context&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;of showing an effect of context -&amp;gt; 3D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="principles-of-vision-1"&gt;Principles of vision?&lt;/h2&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="computational-neuroscience-of-vision"&gt;Computational neuroscience of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="computational-neuroscience-of-vision-1"&gt;Computational neuroscience of vision&lt;/h2&gt;
&lt;figure id="figure-sejnowski-koch--churchland-1998httpwwwhmsharvardedubssneurobornlabnb204paperssejnowski-koch-churchland-science1988pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Churchland92.png" alt="[[Sejnowski, Koch &amp; Churchland, 1998](http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf)]" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf" target="_blank" rel="noopener"&gt;Sejnowski, Koch &amp;amp; Churchland, 1998&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="anatomy-of-the-human-visual-system"&gt;Anatomy of the Human Visual system&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.readkong.com/static/06/b0/06b09f0235ae7fcf29438ce317c10e60/optogenetic-visual-cortical-prosthesis-9612386-7.jpg" alt="" loading="lazy" data-zoomable width="61%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="human-visual-system--the-hmax-model"&gt;Human Visual system : the HMAX model&lt;/h2&gt;
&lt;figure id="figure-serre-and-poggio-2006httpsbiologystackexchangecomquestions10955ventral-stream-pathway-and-architecture-proposed-by-poggios-group"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.stack.imgur.com/ZlFnp.png" alt="[[Serre and Poggio, 2006]](https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group)" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;[Serre and Poggio, 2006]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--hierarchy"&gt;Convolutional Neural Networks : Hierarchy&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks-cnns"&gt;Convolutional Neural Networks (CNNs)&lt;/h2&gt;
&lt;figure id="figure-jérémie--lp-2023httpslaurentperrinetgithubiopublicationjeremie-23-ultra-fast-cat"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.mdpi.com/vision/vision-07-00029/article_deploy/html/images/vision-07-00029-g003.png" alt="[[Jérémie &amp; LP, 2023](https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/" target="_blank" rel="noopener"&gt;Jérémie &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;sota&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;!-- ---
## Anatomy of the Human Visual system
&lt;figure id="figure-wikipediahttpsenwikipediaorgwikivisual_system"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/e/e4/Voies_visuelles3.svg" alt="[[Wikipedia]](https://en.wikipedia.org/wiki/Visual_system)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Visual_system" target="_blank" rel="noopener"&gt;[Wikipedia]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;hr&gt;
&lt;h2 id="primary-visual-cortex-hubel--wiesel"&gt;Primary visual cortex: Hubel &amp;amp; Wiesel&lt;/h2&gt;
&lt;figure id="figure-hubel--wiesel-1962"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/scientists.jpg" alt="[Hubel &amp; Wiesel, 1962]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Hubel &amp;amp; Wiesel, 1962]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="primary-visual-cortex-hubel--wiesel-1"&gt;Primary visual cortex: Hubel &amp;amp; Wiesel&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/talk/2023-05-10-phd-program-neurosciences-computationnelles/hubel_wiesel.webm" type="video/webm"&gt;
&lt;/video&gt;
&lt;p&gt;[Hubel &amp;amp; Wiesel, 1962] - from &lt;a href="https://www.youtube.com/@Neuroslicer" target="_blank" rel="noopener"&gt;@Neuroslicer&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=KE952yueVLA" target="_blank" rel="noopener"&gt;https://www.youtube.com/watch?v=KE952yueVLA&lt;/a&gt; -
&lt;a href="https://laurentperrinet.github.io/talk/2023-05-10-phd-program-neurosciences-computationnelles/hubel_wiesel.webm" target="_blank" rel="noopener"&gt;https://laurentperrinet.github.io/talk/2023-05-10-phd-program-neurosciences-computationnelles/hubel_wiesel.webm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;simple cell 4:09&lt;/li&gt;
&lt;li&gt;excerpt &lt;a href="https://raw.githubusercontent.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/master/figures/ComplexDirSelCortCell250_title.mp4" target="_blank" rel="noopener"&gt;https://raw.githubusercontent.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/master/figures/ComplexDirSelCortCell250_title.mp4&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--hierarchy-1"&gt;Convolutional Neural Networks : hierarchy&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;backpropagation is not bioplausible&lt;/li&gt;
&lt;li&gt;modification&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;One-dimensional &lt;a href="https://en.wikipedia.org/wiki/Convolution#Discrete_convolution" target="_blank" rel="noopener"&gt;discrete convolution&lt;/a&gt; (e.g. in time) with a kernel $g$ of radius $K$, as sketched in code below:
$$
(f \ast g)[n]=\sum_{m=-K}^{K} f[n-m] \cdot g[m]
$$&lt;/li&gt;
&lt;/ul&gt;
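&lt;p&gt;A minimal NumPy sketch of this formula; the zero-padding at the borders and the odd-sized kernel are assumptions of the example, not part of the definition:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def conv1d(f, g):
    # discrete convolution of a signal f with a kernel g of radius K
    K = (len(g) - 1) // 2             # kernel radius (odd-sized kernel assumed)
    f_pad = np.pad(f, K)              # zero-pad the borders
    out = np.zeros(len(f))
    for n in range(len(f)):
        for m in range(-K, K + 1):
            out[n] += f_pad[n + K - m] * g[m + K]   # f[n - m] * g[m]
    return out

f = np.array([0., 0., 1., 0., 0.])    # a single impulse
g = np.array([.25, .5, .25])          # smoothing kernel of radius K = 1
print(conv1d(f, g))                   # the kernel appears, centred on the impulse
# same result as np.convolve(f, g, mode='same')
&lt;/code&gt;&lt;/pre&gt;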
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-1"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Convolution of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast g)[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x-i, y-j] \cdot g[i, j]
$$&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-2"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cross-correlation&lt;/strong&gt; of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x+i, y+j] \cdot g[i, j]
$$&lt;/p&gt;
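&lt;p&gt;The only change with respect to the convolution above is the sign of the spatial offsets: the kernel is slid over the image without being flipped, so the cross-correlation equals a convolution with the kernel reversed along both axes. A minimal SciPy check (image and kernel sizes are arbitrary choices):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np
from scipy.signal import convolve2d, correlate2d

rng = np.random.default_rng(0)
f = rng.normal(size=(8, 8))                       # a small random image
g = rng.normal(size=(3, 3))                       # a kernel of radius K = 1

corr = correlate2d(f, g, mode='same')             # cross-correlation
conv = convolve2d(f, g[::-1, ::-1], mode='same')  # convolution with the flipped kernel
print(np.allclose(corr, conv))                    # True
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that this cross-correlation is what deep-learning libraries such as PyTorch actually implement, even though the layers are called convolutions.&lt;/p&gt;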
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-3"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;figure id="figure-amidi--amidihttpsstanfordedushervineteachingcs-230cheatsheet-convolutional-neural-networks"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://stanford.edu/~shervine/teaching/cs-230/illustrations/convolution-layer-a.png" alt="[[Amidi &amp; Amidi](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks" target="_blank" rel="noopener"&gt;Amidi &amp;amp; Amidi&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-4"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of an image defined on several channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{c=1}^{C} \sum_{i,j} f[c, x+i, y+j] \cdot g[c, i, j]
$$&lt;/p&gt;
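&lt;p&gt;A direct (unoptimised) NumPy sketch of the multi-channel case, summing over the channel index as well as the spatial offsets; shapes, padding and values are illustrative only:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

C, K, N = 3, 1, 8                                 # channels, kernel radius, image size
f = np.random.rand(C, N, N)                       # multi-channel image f[c, x, y]
g = np.random.rand(C, 2 * K + 1, 2 * K + 1)       # kernel g[c, i, j]

f_pad = np.pad(f, ((0, 0), (K, K), (K, K)))       # zero-pad the spatial axes only
out = np.zeros((N, N))
for x in range(N):
    for y in range(N):
        patch = f_pad[:, x:x + 2 * K + 1, y:y + 2 * K + 1]   # f[c, x+i, y+j]
        out[x, y] = (patch * g).sum()             # sum over c, i and j
&lt;/code&gt;&lt;/pre&gt;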
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-5"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of a multi-channel image for multiple output channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[k, x, y] = \sum_{c=1}^{C} \sum_{i,j} f[c, x+i, y+j] \cdot g[k, c, i, j]
$$&lt;/p&gt;
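&lt;p&gt;Stacking one such kernel per output channel $k$ gives exactly the operation computed by torch.nn.Conv2d: the weight tensor is indexed [k, c, i, j] as in the formula. A short sketch (the channel counts and image size below are arbitrary):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch

conv = torch.nn.Conv2d(in_channels=3, out_channels=16,
                       kernel_size=3, padding=1, bias=False)
print(conv.weight.shape)        # torch.Size([16, 3, 3, 3]), i.e. g[k, c, i, j]

x = torch.randn(1, 3, 32, 32)   # a batch of one 3-channel image f[c, x, y]
y = conv(x)
print(y.shape)                  # torch.Size([1, 16, 32, 32]): one map per output channel
&lt;/code&gt;&lt;/pre&gt;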
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--predictive-coding"&gt;Convolutional Neural Networks : Predictive coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;adding sparse coding + feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--predictive-coding-1"&gt;Convolutional Neural Networks : Predictive coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;interpretable features&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--topography"&gt;Convolutional Neural Networks : Topography&lt;/h2&gt;
&lt;figure id="figure-bosking-et-al-1997"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Bosking97Fig4.jpg" alt="[Bosking *et al*, 1997]" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Bosking &lt;em&gt;et al&lt;/em&gt;, 1997]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--topography-1"&gt;Convolutional Neural Networks : Topography&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2022httpslaurentperrinetgithubiopublicationfranciosini-21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/franciosini-21/featured.jpg" alt="[[Boutin *et al*, 2022](https://laurentperrinet.github.io/publication/franciosini-21/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/franciosini-21/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2022&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="computational-neuroscience-of-vision-2"&gt;Computational neuroscience of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h1 id="dynamics-of-vision"&gt;Dynamics of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;!--
---
## Dynamics of vision
&lt;figure id="figure-thorpe-2001httpslaurentperrinetgithubio2022-01-12_neurocercle21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/scheme_thorpe.jpg" alt="[[Thorpe, 2001]](https://laurentperrinet.github.io/2022-01-12_NeuroCercle/#/2/1)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/#/2/1" target="_blank" rel="noopener"&gt;[Thorpe, 2001]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;!--
---
## Dynamics of vision
&lt;figure id="figure-precise-spiking-motifs-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency-estimate.jpg" alt="Precise Spiking Motifs] ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Precise Spiking Motifs] (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-1"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-visual-latencies-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency_bg.jpg" alt="Visual latencies ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In particular, our group is interested in the dynamics of neural processing.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The visual system is very efficient at generating a decision: from the retinal image through the different stages of the visual pathways, here in a macaque monkey, down to a reaction of the finger muscles in about 300 milliseconds.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the process of categorizing an object takes about 10 layers of processing&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-2"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-visual-latencies-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency.jpg" alt="Visual latencies ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;the latencies are similar in the human brain, merely scaled up with brain size&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;as a consequence, this efficiency is thought to be achieved by spikes, that is, brief all-or-none events which are passed from one assembly of neurons to another across the very large network that forms the brain.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-3"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-4"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/figure-tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston, 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston, 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-5"&gt;Dynamics of vision&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/perrinet-19-temps/flash_lag.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-6"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-diagonal-markov-model-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_DiagonalMarkov.jpg" alt="Diagonal markov model ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Diagonal markov model (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-7"&gt;Dynamics of vision&lt;/h2&gt;
&lt;!--
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/PBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/MBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/positional-delay.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;Flash-lag effect: MBP (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;)&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-8"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks"&gt;Spiking Neural Networks&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-leaky-integrate-and-fire-neuron"&gt;Spiking Neural Networks: Leaky Integrate-and-Fire Neuron&lt;/h2&gt;
&lt;figure id="figure-grimaldi-et-al-2023-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="[Grimaldi *et al*, 2023, [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023, &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
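&lt;p&gt;For concreteness, a minimal Euler-integration sketch of such a leaky integrate-and-fire neuron; the time constant, threshold and input current below are illustrative choices, not values from the figure:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

dt, n_steps = 1e-3, 200                 # time step (s) and number of steps (200 ms)
tau, v_thr, v_reset = 20e-3, 1.0, 0.0   # membrane time constant, threshold, reset value
I = 1.2                                 # constant input current (arbitrary units)

v, spike_times = 0.0, []
for step in range(n_steps):
    v += dt / tau * (I - v)             # leaky integration: dv/dt = (I - v) / tau
    if v &amp;gt;= v_thr:                    # threshold crossing
        spike_times.append(step * dt)   # emit a spike...
        v = v_reset                     # ... and reset the membrane potential
print(len(spike_times), 'spikes, the first one at', round(spike_times[0] * 1000, 1), 'ms')
&lt;/code&gt;&lt;/pre&gt;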
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.sstatic.net/ixnrz.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-1"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-2"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-diesmann-et-al-1999httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_3_diesmann_et_al_1999py"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Diesmann_et_al_1999.png" alt="[[Diesmann et al. 1999](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py" target="_blank" rel="noopener"&gt;Diesmann et al. 1999&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-3"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-haimerl-et-al-2019httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" alt="[[Haimerl et al, 2019](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Haimerl et al, 2019&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Izhikevich polychronization&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;yet the domain is vast, and there is a lot left to do with SNNs&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-grimaldi-et-al-2023-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="[Grimaldi *et al*, 2023, [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Grimaldi &lt;em&gt;et al&lt;/em&gt;, 2023, &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-1"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-2"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/HSD.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HSD neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;event-based cameras&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering-1"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/HDSNN_conv.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering-2"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HSD neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering-3"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;nice kernels&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering-4"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/accuracy.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;frugal computing&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="artificial-neural-networks-and-machine-learning-applied-to-the-understanding-of-biological-vision"&gt;Artificial neural networks and machine learning applied to the understanding of biological vision&lt;/h2&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Only the speaker can read these notes&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;S&lt;/code&gt; key to view&lt;/li&gt;
&lt;li&gt;more on &lt;a href="https://raw.githubusercontent.com/wowchemy/starter-hugo-academic/master/exampleSite/content/slides/example/index.md" target="_blank" rel="noopener"&gt;doc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="interactions-between-machine-learning-artificial-neural-networks-and-our-understanding-of-biological-vision-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-05-10-phd-program_neurosciences-computationnelles/?transition=fade" target="_blank" rel="noopener"&gt;Interactions between machine learning, artificial neural networks and our understanding of biological vision&lt;/a&gt;&lt;/h1&gt;
&lt;h4 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-neuroschool-phd-program-in-neuroscience-computation-neuroscience-1"&gt;&lt;u&gt;&lt;a href="https://laurentperrinet.github.io/talk/2023-05-10-phd-program-neurosciences-computationnelles" target="_blank" rel="noopener"&gt;[2023-05-10]&lt;/a&gt; &lt;a href="https://neuro-marseille.org/en/training/phd-program/" target="_blank" rel="noopener"&gt;NeuroSchool PhD Program in Neuroscience&lt;/a&gt;: Computation Neuroscience&lt;/u&gt;&lt;/h4&gt;
&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logos" height="130"/&gt;
&lt;img src="https://laurentperrinet.github.io/talk/2023-05-10-phd-program-neurosciences-computationnelles/qrcode.png" alt="qrcode" height="130"/&gt;
&lt;p&gt;Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;!-- ![logo](https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg)
![QR code](https://laurentperrinet.github.io/talk/2023-05-10-phd-program-neurosciences-computationnelles/qrcode.png) --&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;thanks for your attention&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2023-04-05-ue-neurosciences-computationnelles</title><link>https://laurentperrinet.github.io/slides/2023-04-05-ue-neurosciences-computationnelles/</link><pubDate>Wed, 05 Apr 2023 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2023-04-05-ue-neurosciences-computationnelles/</guid><description>&lt;section&gt;
&lt;h1 id="artificial-neural-networks-and-machine-learning-applied-to-the-understanding-of-biological-vision"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-04-05-ue-neurosciences-computationnelles/?transition=fade" target="_blank" rel="noopener"&gt;Artificial neural networks and machine learning applied to the understanding of biological vision&lt;/a&gt;&lt;/h1&gt;
&lt;h4 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-master-1-neurosciences-et-sciences-cognitives"&gt;&lt;u&gt;&lt;a href="https://ametice.univ-amu.fr/course/view.php?id=95116" target="_blank" rel="noopener"&gt;[2023-04-05]&lt;/a&gt; &lt;a href="https://sciences.univ-amu.fr/fr/formation/masters/master-neurosciences" target="_blank" rel="noopener"&gt;Master 1 Neurosciences et Sciences Cognitives.&lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;objective= understand biological vision&lt;/li&gt;
&lt;li&gt;interaction between artificial and natural NNs&lt;/li&gt;
&lt;li&gt;outline&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="principles-of-vision"&gt;Principles of Vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;cut in different levels: Marr (+ Poggio)&lt;/li&gt;
&lt;li&gt;arbitrary, but useful division of labor&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-ilya-repin-1884httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_001.jpg" alt="[An Unexpected Visitor (Ilya Repin, 1884)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Ilya Repin, 1884)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;seeing= interacting with the visual world&lt;/li&gt;
&lt;li&gt;social animals: looking at emotions&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-1"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_002.jpg" alt="[An Unexpected Visitor (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;active: the eye is always moving&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fr.wikipedia.org/wiki/Alfred_Iarbous" target="_blank" rel="noopener"&gt;https://fr.wikipedia.org/wiki/Alfred_Iarbous&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-2"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---age-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_003.jpg" alt="[An Unexpected Visitor - *Age?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;Age?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;active: depends on task&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-3"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---how-long-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_006.jpg" alt="[An Unexpected Visitor - *How long?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;How long?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;consistency of eye traces&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Kitaoka.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Ilusions of brightness or lightness &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion_without.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;effect of context -&amp;gt; 3D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-3"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-rotating-snakes-akiyoshi-kitaokahttpwwwritsumeiacjpakitaokaindex-ehtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/42_rotsnakes_main.jpg" alt="[Rotating Snakes *Akiyoshi KITAOKA*](http://www.ritsumei.ac.jp/~akitaoka/index-e.html)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Rotating Snakes &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-1976-viking-orbiter-imagehttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Face-on-mars.jpg" alt="[Cydonia Mensae (1976) *Viking Orbiter image*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (1976) &lt;em&gt;Viking Orbiter image&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_low.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_high.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="principles-of-vision-1"&gt;Principles of vision?&lt;/h2&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="computational-neuroscience-of-vision"&gt;Computational neuroscience of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="computational-neuroscience-of-vision-1"&gt;Computational neuroscience of vision&lt;/h2&gt;
&lt;figure id="figure-sejnowski-koch--churchland-1998httpwwwhmsharvardedubssneurobornlabnb204paperssejnowski-koch-churchland-science1988pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Churchland92.png" alt="[[Sejnowski, Koch &amp; Churchland (1998)](http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf)]" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf" target="_blank" rel="noopener"&gt;Sejnowski, Koch &amp;amp; Churchland (1998)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="anatomy-of-the-human-visual-system"&gt;Anatomy of the Human Visual system&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.readkong.com/static/06/b0/06b09f0235ae7fcf29438ce317c10e60/optogenetic-visual-cortical-prosthesis-9612386-7.jpg" alt="" loading="lazy" data-zoomable width="61%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="human-visual-system--the-hmax-model"&gt;Human Visual system : the HMAX model&lt;/h2&gt;
&lt;figure id="figure-serre-and-poggio-2007httpsbiologystackexchangecomquestions10955ventral-stream-pathway-and-architecture-proposed-by-poggios-group"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.stack.imgur.com/ZlFnp.png" alt="[[Serre and Poggio, 2007](https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group)]" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;Serre and Poggio, 2007&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;!-- ---
## Anatomy of the Human Visual system
&lt;figure id="figure-wikipediahttpsenwikipediaorgwikivisual_system"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/e/e4/Voies_visuelles3.svg" alt="[[Wikipedia]](https://en.wikipedia.org/wiki/Visual_system)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Visual_system" target="_blank" rel="noopener"&gt;[Wikipedia]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;hr&gt;
&lt;h2 id="primary-visual-cortex-hubel--wiesel"&gt;Primary visual cortex: Hubel &amp;amp; Wiesel&lt;/h2&gt;
&lt;figure id="figure-hubel--wiesel-1962"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/scientists.jpg" alt="[Hubel &amp; Wiesel, 1962]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Hubel &amp;amp; Wiesel, 1962]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="primary-visual-cortex-hubel--wiesel-1"&gt;Primary visual cortex: Hubel &amp;amp; Wiesel&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://raw.githubusercontent.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/master/figures/ComplexDirSelCortCell250_title.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;[Hubel &amp;amp; Wiesel, 1962]&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--hierarchy"&gt;Convolutional Neural Networks : Hierarchy&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;One-dimensional &lt;a href="https://en.wikipedia.org/wiki/Convolution#Discrete_convolution" target="_blank" rel="noopener"&gt;discrete convolution&lt;/a&gt; (e.g. in time) with a kernel $g$ of radius $K$:
$$
(f \ast g)[n]=\sum_{m=-K}^{K} f[n-m] \cdot g[m]
$$&lt;/li&gt;
&lt;/ul&gt;
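&lt;p&gt;As a minimal sketch (not part of the original slides), the sum above can be written directly in NumPy and checked against &lt;code&gt;np.convolve&lt;/code&gt;; the test signal and smoothing kernel are arbitrary choices:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def conv1d(f, g):
    # kernel g has odd length 2*K+1; zero-pad f so the sum is defined at every n
    K = (len(g) - 1) // 2
    f_pad = np.pad(f, K)
    out = np.zeros(len(f))
    for n in range(len(f)):
        for m in range(-K, K + 1):
            out[n] += f_pad[n + K - m] * g[m + K]   # f[n-m] * g[m]
    return out

f = np.random.randn(64)
g = np.array([1.0, 2.0, 1.0]) / 4.0                 # a small smoothing kernel, K = 1
assert np.allclose(conv1d(f, g), np.convolve(f, g, mode="same"))
&lt;/code&gt;&lt;/pre&gt;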
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-1"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Convolution of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast g)[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x-i, y-j] \cdot g[i, j]
$$&lt;/p&gt;
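&lt;p&gt;A hand-written sketch of this two-dimensional convolution, assuming SciPy is available for the cross-check (the image, kernel and zero-padding choice are illustrative only):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np
from scipy.signal import convolve2d   # only used to check the hand-written loops

def conv2d(f, g):
    # f: 2-D image, g: (2K+1, 2K+1) kernel, zero padding at the borders
    K = (g.shape[0] - 1) // 2
    f_pad = np.pad(f, K)
    out = np.zeros_like(f, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            for i in range(-K, K + 1):
                for j in range(-K, K + 1):
                    # f[x-i, y-j] * g[i, j], the kernel being stored with its centre at (K, K)
                    out[x, y] += f_pad[x - i + K, y - j + K] * g[i + K, j + K]
    return out

f, g = np.random.randn(16, 16), np.random.randn(3, 3)
assert np.allclose(conv2d(f, g), convolve2d(f, g, mode="same"))
&lt;/code&gt;&lt;/pre&gt;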
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-2"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cross-correlation&lt;/strong&gt; of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x+i, y+j] \cdot g[i, j]
$$&lt;/p&gt;
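&lt;p&gt;The same loops with a plus sign give the cross-correlation; a sketch with an illustrative check (assuming SciPy is available) that it equals convolving with the flipped kernel:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np
from scipy.signal import convolve2d

def xcorr2d(f, g):
    # same loops as the convolution above, but with a plus sign: the kernel is not
    # flipped, which is what the "convolution" layers of deep learning actually compute
    K = (g.shape[0] - 1) // 2
    f_pad = np.pad(f, K)
    out = np.zeros_like(f, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            for i in range(-K, K + 1):
                for j in range(-K, K + 1):
                    out[x, y] += f_pad[x + i + K, y + j + K] * g[i + K, j + K]
    return out

# cross-correlating with g is the same as convolving with the flipped kernel
f, g = np.random.randn(16, 16), np.random.randn(3, 3)
assert np.allclose(xcorr2d(f, g), convolve2d(f, np.flip(g), mode="same"))
&lt;/code&gt;&lt;/pre&gt;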
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-3"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;figure id="figure-amidi--amidihttpsstanfordedushervineteachingcs-230cheatsheet-convolutional-neural-networks"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://stanford.edu/~shervine/teaching/cs-230/illustrations/convolution-layer-a.png" alt="[[Amidi &amp; Amidi](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks" target="_blank" rel="noopener"&gt;Amidi &amp;amp; Amidi&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-4"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of an image defined on several channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{c=1}^{C} \sum_{i,j} f[c, x+i, y+j] \cdot g[c, i, j]
$$&lt;/p&gt;
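&lt;p&gt;A sketch of this multi-channel correlation, assuming a channel-first (C, H, W) layout as in PyTorch; shapes and values are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def xcorr_channels(f, g):
    # f: (C, H, W) image with C channels, g: (C, 2K+1, 2K+1) kernel;
    # the sum now also runs over the channel index c
    C, H, W = f.shape
    K = (g.shape[-1] - 1) // 2
    f_pad = np.pad(f, ((0, 0), (K, K), (K, K)))
    out = np.zeros((H, W))
    for x in range(H):
        for y in range(W):
            window = f_pad[:, x:x + 2 * K + 1, y:y + 2 * K + 1]   # (C, 2K+1, 2K+1) patch
            out[x, y] = np.sum(window * g)
    return out

f = np.random.randn(3, 16, 16)     # e.g. an RGB image in channel-first layout
g = np.random.randn(3, 3, 3)
print(xcorr_channels(f, g).shape)  # (16, 16): a single output map
&lt;/code&gt;&lt;/pre&gt;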
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-5"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of a multi-channel image for multiple output channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[k, x, y] = \sum_{c=1}^{C} \sum_{i,j} f[c, x+i, y+j] \cdot g[k, c, i, j]
$$&lt;/p&gt;
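&lt;p&gt;This multi-output form is what &lt;code&gt;torch.nn.functional.conv2d&lt;/code&gt; computes; a sketch comparing it with explicit loops (the sizes are arbitrary, and the leading batch dimension is a requirement of the PyTorch API):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch
import torch.nn.functional as F

# the multi-output formula above is what torch.nn.functional.conv2d computes
# (a cross-correlation, with the weights indexed as [k, c, i, j])
C_in, C_out, K, H, W = 3, 5, 1, 16, 16
f = torch.randn(1, C_in, H, W)                      # a batch containing one image
g = torch.randn(C_out, C_in, 2 * K + 1, 2 * K + 1)
out = F.conv2d(f, g, padding=K)                     # zero padding keeps the H x W size

# the same result with explicit loops over output channels k and positions (x, y)
ref = torch.zeros(1, C_out, H, W)
f_pad = F.pad(f, (K, K, K, K))
for k in range(C_out):
    for x in range(H):
        for y in range(W):
            window = f_pad[0, :, x:x + 2 * K + 1, y:y + 2 * K + 1]
            ref[0, k, x, y] = (window * g[k]).sum()

assert torch.allclose(out, ref, atol=1e-5)
&lt;/code&gt;&lt;/pre&gt;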
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--the-hmax-model"&gt;Convolutional Neural Networks : the HMAX model&lt;/h2&gt;
&lt;figure id="figure-serre-and-poggio-2006httpsbiologystackexchangecomquestions10955ventral-stream-pathway-and-architecture-proposed-by-poggios-group"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.stack.imgur.com/ZlFnp.png" alt="[[Serre and Poggio, 2006]](https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group)" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;[Serre and Poggio, 2006]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;state of the art&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks-cnns"&gt;Convolutional Neural Networks (CNNs)&lt;/h2&gt;
&lt;figure id="figure-jérémie--lp-2023httpslaurentperrinetgithubiopublicationjeremie-23-ultra-fast-cat"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.mdpi.com/vision/vision-07-00029/article_deploy/html/images/vision-07-00029-g003.png" alt="[[Jérémie &amp; LP, 2023](https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/" target="_blank" rel="noopener"&gt;Jérémie &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--hierarchy-1"&gt;Convolutional Neural Networks : hierarchy&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;backpropagation is not biologically plausible&lt;/li&gt;
&lt;li&gt;modification&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--predictive-coding"&gt;Convolutional Neural Networks : Predictive coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;adding sparse coding + feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--predictive-coding-1"&gt;Convolutional Neural Networks : Predictive coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;interpretable features&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--topography"&gt;Convolutional Neural Networks : Topography&lt;/h2&gt;
&lt;figure id="figure-bosking-et-al-1997"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Bosking97Fig4.jpg" alt="[Bosking *et al*, 1997]" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Bosking &lt;em&gt;et al&lt;/em&gt;, 1997]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--topography-1"&gt;Convolutional Neural Networks : Topography&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2022httpslaurentperrinetgithubiopublicationfranciosini-21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/franciosini-21/featured.jpg" alt="[[Boutin *et al*, 2022](https://laurentperrinet.github.io/publication/franciosini-21/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/franciosini-21/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2022&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="computational-neuroscience-of-vision-2"&gt;Computational neuroscience of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h1 id="dynamics-of-vision"&gt;Dynamics of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;!--
---
## Dynamics of vision
&lt;figure id="figure-thorpe-2001httpslaurentperrinetgithubio2022-01-12_neurocercle21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/scheme_thorpe.jpg" alt="[[Thorpe (2001)]](https://laurentperrinet.github.io/2022-01-12_NeuroCercle/#/2/1)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/#/2/1" target="_blank" rel="noopener"&gt;[Thorpe (2001)]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;!--
---
## Dynamics of vision
&lt;figure id="figure-precise-spiking-motifs-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency-estimate.jpg" alt="Precise Spiking Motifs] ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Precise Spiking Motifs] (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-1"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-visual-latencies-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency_bg.jpg" alt="Visual latencies ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In particular, in our group we are interested in the dynamics of neural processing.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The visual system is very efficient at generating a decision: from the retinal image, through the different stages of the visual pathways (here for a macaque monkey), to a reaction of the finger muscles in about 300 milliseconds.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the process of categorizing an object traverses about 10 processing stages (layers)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-2"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-visual-latencies-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency.jpg" alt="Visual latencies ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;the latencies are similar in the human brain, merely scaled up due to its larger size&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;as a consequence, it is thought that this efficiency is achieved by spikes, that is, brief all-or-none events which are passed across the very large network that forms the brain, from one assembly of neurons to the next.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-3"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-4"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/figure-tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston, 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston, 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-5"&gt;Dynamics of vision&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/perrinet-19-temps/flash_lag.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-6"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-diagonal-markov-model-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_DiagonalMarkov.jpg" alt="Diagonal markov model ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Diagonal Markov model (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-7"&gt;Dynamics of vision&lt;/h2&gt;
&lt;!--
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/PBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/MBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/positional-delay.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;Flash-lag effect: motion-based prediction (MBP) (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;)&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-8"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks"&gt;Spiking Neural Networks&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-leaky-integrate-and-fire-neuron"&gt;Spiking Neural Networks: Leaky Integrate-and-Fire Neuron&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
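&lt;p&gt;A minimal sketch of the leaky integrate-and-fire dynamics shown in the animation; all parameter values (time constant, threshold, input current) are arbitrary choices for illustration, not taken from the original slides:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# Euler integration of a leaky integrate-and-fire neuron:
#   tau * dV/dt = -(V - V_rest) + R * I(t), with a spike and a reset at threshold
dt, T = 1e-4, 0.5                                   # time step and duration (s)
tau, R = 20e-3, 1e8                                 # membrane time constant (s), resistance (Ohm)
V_rest, V_thr, V_reset = -70e-3, -50e-3, -70e-3     # potentials (V)
I = 250e-12 * np.ones(int(T / dt))                  # constant 250 pA input current

V, spikes = np.full(len(I), V_rest), []
for t in range(1, len(I)):
    V[t] = V[t - 1] + (-(V[t - 1] - V_rest) + R * I[t - 1]) * dt / tau
    if V[t] &amp;gt;= V_thr:                            # threshold crossing
        spikes.append(t * dt)                       # record the spike time...
        V[t] = V_reset                              # ...and reset the membrane potential
print(len(spikes), "spikes in", T, "s")
&lt;/code&gt;&lt;/pre&gt;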
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.sstatic.net/ixnrz.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-1"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-2"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-diesmann-et-al-1999httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_3_diesmann_et_al_1999py"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Diesmann_et_al_1999.png" alt="[[Diesmann et al. 1999](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py" target="_blank" rel="noopener"&gt;Diesmann et al. 1999&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-3"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-haimerl-et-al-2019httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" alt="[[Haimerl et al, 2019](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Haimerl et al, 2019&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Izhikevich polychronization&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;yet the domain is vast, and there is a lot to do in SNNs&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-1"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-2"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/HSD.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HSD neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;event-based cameras&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
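&lt;p&gt;As a toy illustration only (not a model of any specific sensor), a sequence of frames can be turned into ON/OFF events by thresholding log-luminance changes, which is the principle behind event-based cameras; the threshold and test pattern below are arbitrary:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def frames_to_events(frames, threshold=0.2):
    # toy conversion of a frame sequence into a list of (t, x, y, polarity) events:
    # an event is emitted whenever the log-luminance at a pixel has drifted by more
    # than the threshold since the last event at that pixel
    log_ref = np.log(frames[0] + 1e-6)
    events = []
    for t in range(1, len(frames)):
        diff = np.log(frames[t] + 1e-6) - log_ref
        for polarity in (+1, -1):
            xs, ys = np.where(polarity * diff &amp;gt;= threshold)
            events += [(t, x, y, polarity) for x, y in zip(xs, ys)]
            log_ref[xs, ys] += polarity * threshold    # move the per-pixel reference
    return events

# a bright dot moving on a dark background yields a sparse stream of ON/OFF events
frames = 0.1 * np.ones((10, 32, 32))
for t in range(10):
    frames[t, 16, 3 * t] = 1.0
print(len(frames_to_events(frames)), "events")
&lt;/code&gt;&lt;/pre&gt;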
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering-1"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/HDSNN_conv.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering-2"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HSD neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering-3"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;nice kernels&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering-4"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/accuracy.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;frugal computing&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="artificial-neural-networks-and-machine-learning-applied-to-the-understanding-of-biological-vision-1"&gt;Artificial neural networks and machine learning applied to the understanding of biological vision&lt;/h2&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Only the speaker can read these notes&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;S&lt;/code&gt; key to view&lt;/li&gt;
&lt;li&gt;more on &lt;a href="https://raw.githubusercontent.com/wowchemy/starter-hugo-academic/master/exampleSite/content/slides/example/index.md" target="_blank" rel="noopener"&gt;doc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="artificial-neural-networks-and-machine-learning-applied-to-the-understanding-of-biological-vision-2"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-04-05-ue-neurosciences-computationnelles/?transition=fade" target="_blank" rel="noopener"&gt;Artificial neural networks and machine learning applied to the understanding of biological vision&lt;/a&gt;&lt;/h1&gt;
&lt;h4 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-master-1-neurosciences-et-sciences-cognitives-1"&gt;&lt;u&gt;&lt;a href="https://ametice.univ-amu.fr/course/view.php?id=95116" target="_blank" rel="noopener"&gt;[2023-04-05]&lt;/a&gt; &lt;a href="https://sciences.univ-amu.fr/fr/formation/masters/master-neurosciences" target="_blank" rel="noopener"&gt;Master 1 Neurosciences et Sciences Cognitives.&lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;objective= understand biological vision&lt;/li&gt;
&lt;li&gt;interaction between artificial and natural NNs&lt;/li&gt;
&lt;li&gt;outline&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;</description></item><item><title>2023-04-03-master-m-4-nc</title><link>https://laurentperrinet.github.io/slides/2023-04-03-master-m-4-nc/</link><pubDate>Mon, 03 Apr 2023 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2023-04-03-master-m-4-nc/</guid><description>&lt;section&gt;
&lt;h1 id="artificial-neural-networks-and-machine-learning-applied-to-the-understanding-of-biological-vision"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-04-03-master-m-4-nc/?transition=fade" target="_blank" rel="noopener"&gt;Artificial neural networks and machine learning applied to the understanding of biological vision&lt;/a&gt;&lt;/h1&gt;
&lt;h4 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-master-m4nc-de-l"&gt;&lt;u&gt;&lt;a href="https://ametice.univ-amu.fr/pluginfile.php/5559779/mod_resource/content/1/Planning_Neurocomp_M1_2022.pdf" target="_blank" rel="noopener"&gt;[2023-04-03]&lt;/a&gt; &lt;a href="https://neuromod.univ-cotedazur.eu" target="_blank" rel="noopener"&gt;Master M4NC de l&amp;rsquo;institut NeuroMod, cours Prospective Innovation and Research.&lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;objective= understand biological vision&lt;/li&gt;
&lt;li&gt;interaction between artificial and natural NNs&lt;/li&gt;
&lt;li&gt;outline&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="principles-of-vision"&gt;Principles of Vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;cut in different levels: Marr (+ Poggio)&lt;/li&gt;
&lt;li&gt;arbitrary, but useful division of labor&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-ilya-repin-1884httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_001.jpg" alt="[An Unexpected Visitor (Ilya Repin, 1884)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Ilya Repin, 1884)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;seeing= interacting with the visual world&lt;/li&gt;
&lt;li&gt;social animals: looking at emotions&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-1"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_002.jpg" alt="[An Unexpected Visitor (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;active: the eye is always moving&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fr.wikipedia.org/wiki/Alfred_Iarbous" target="_blank" rel="noopener"&gt;https://fr.wikipedia.org/wiki/Alfred_Iarbous&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-2"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---age-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_003.jpg" alt="[An Unexpected Visitor - *Age?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;Age?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;active: depends on task&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="what-is-the-function-of-vision-3"&gt;What is the function of vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---how-long-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_006.jpg" alt="[An Unexpected Visitor - *How long?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;How long?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;consistency of eye traces&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Kitaoka.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Ilusions of brightness or lightness &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion_without.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;effect of context -&amp;gt; 3D&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions-3"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-rotating-snakes-akiyoshi-kitaokahttpwwwritsumeiacjpakitaokaindex-ehtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/42_rotsnakes_main.jpg" alt="[Rotating Snakes *Akiyoshi KITAOKA*](http://www.ritsumei.ac.jp/~akitaoka/index-e.html)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Rotating Snakes &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-1976-viking-orbiter-imagehttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Face-on-mars.jpg" alt="[Cydonia Mensae (1976) *Viking Orbiter image*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (1976) &lt;em&gt;Viking Orbiter image&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_low.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsenwikipediaorgwikicydonia_mars"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_high.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://en.wikipedia.org/wiki/Cydonia_(Mars))" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Cydonia_%28Mars%29" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="principles-of-vision-1"&gt;Principles of vision?&lt;/h2&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="computational-neuroscience-of-vision"&gt;Computational neuroscience of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="computational-neuroscience-of-vision-1"&gt;Computational neuroscience of vision&lt;/h2&gt;
&lt;figure id="figure-sejnowski-koch--churchland-1998httpwwwhmsharvardedubssneurobornlabnb204paperssejnowski-koch-churchland-science1988pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Churchland92.png" alt="[[Sejnowski, Koch &amp; Churchland (1998)](http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf)]" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf" target="_blank" rel="noopener"&gt;Sejnowski, Koch &amp;amp; Churchland (1998)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="anatomy-of-the-human-visual-system"&gt;Anatomy of the Human Visual system&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.readkong.com/static/06/b0/06b09f0235ae7fcf29438ce317c10e60/optogenetic-visual-cortical-prosthesis-9612386-7.jpg" alt="" loading="lazy" data-zoomable width="61%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="human-visual-system--the-hmax-model"&gt;Human Visual system : the HMAX model&lt;/h2&gt;
&lt;figure id="figure-serre-and-poggio-2007httpsbiologystackexchangecomquestions10955ventral-stream-pathway-and-architecture-proposed-by-poggios-group"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.stack.imgur.com/ZlFnp.png" alt="[[Serre and Poggio, 2007](https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group)]" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;Serre and Poggio, 2007&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;!-- ---
## Anatomy of the Human Visual system
&lt;figure id="figure-wikipediahttpsenwikipediaorgwikivisual_system"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/e/e4/Voies_visuelles3.svg" alt="[[Wikipedia]](https://en.wikipedia.org/wiki/Visual_system)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Visual_system" target="_blank" rel="noopener"&gt;[Wikipedia]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;hr&gt;
&lt;h2 id="primary-visual-cortex-hubel--wiesel"&gt;Primary visual cortex: Hubel &amp;amp; Wiesel&lt;/h2&gt;
&lt;figure id="figure-hubel--wiesel-1962"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/scientists.jpg" alt="[Hubel &amp; Wiesel, 1962]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Hubel &amp;amp; Wiesel, 1962]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="primary-visual-cortex-hubel--wiesel-1"&gt;Primary visual cortex: Hubel &amp;amp; Wiesel&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://raw.githubusercontent.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/master/figures/ComplexDirSelCortCell250_title.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;[Hubel &amp;amp; Wiesel, 1962]&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--hierarchy"&gt;Convolutional Neural Networks : Hierarchy&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;One-dimensional &lt;a href="https://en.wikipedia.org/wiki/Convolution#Discrete_convolution" target="_blank" rel="noopener"&gt;discrete convolution&lt;/a&gt; (e.g. in time) with a kernel $g$ of radius $K$:
$$
(f \ast g)[n]=\sum_{m=-K}^{K} f[n-m] \cdot g[m]
$$&lt;/li&gt;
&lt;/ul&gt;
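&lt;p&gt;An alternative sketch of the same one-dimensional formula (not from the original slides): convolution seen as multiplication by a banded matrix, which makes the weight sharing of convolutional layers explicit; the signal and kernel are arbitrary:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

# the same 1-D formula, seen as a matrix product: the convolution matrix has the
# kernel values repeated along its diagonals (shifted, shared weights)
f = np.random.randn(32)
g = np.array([1.0, 2.0, 1.0]) / 4.0       # kernel of radius K = 1
K, N = (len(g) - 1) // 2, len(f)

W = np.zeros((N, N))
for m in range(-K, K + 1):
    W += g[m + K] * np.eye(N, k=-m)       # place g[m] on the (-m)-th diagonal

assert np.allclose(W @ f, np.convolve(f, g, mode="same"))
&lt;/code&gt;&lt;/pre&gt;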
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-1"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Convolution of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast g)[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x-i, y-j] \cdot g[i, j]
$$&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-2"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cross-correlation&lt;/strong&gt; of an image (two-dimensional) with a kernel $g$ of radius $K\times K$:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[x+i, y+j] \cdot g[i, j]
$$&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-3"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;figure id="figure-amidi--amidihttpsstanfordedushervineteachingcs-230cheatsheet-convolutional-neural-networks"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://stanford.edu/~shervine/teaching/cs-230/illustrations/convolution-layer-a.png" alt="[[Amidi &amp; Amidi](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks" target="_blank" rel="noopener"&gt;Amidi &amp;amp; Amidi&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-4"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of an image defined on several channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[x, y] = \sum_{c=1}^{C} \sum_{i,j} f[c, x+i, y+j] \cdot g[c, i, j]
$$&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--mathematics-5"&gt;Convolutional Neural Networks : Mathematics&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Correlation of a multi-channel image for multiple output channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the order of the indices&lt;/a&gt;):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast \tilde{g})[k, x, y] = \sum_{c=1}^{C} \sum_{i,j} f[c, x+i, y+j] \cdot g[k, c, i, j]
$$&lt;/p&gt;
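&lt;p&gt;As a short illustration (assuming PyTorch), the index order above matches the weight layout of &lt;code&gt;torch.nn.Conv2d&lt;/code&gt;; the channel counts and image size are arbitrary:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch

# the index order [k, c, i, j] in the formula above matches the weight tensor of
# torch.nn.Conv2d: (out_channels, in_channels, kernel_height, kernel_width)
conv = torch.nn.Conv2d(in_channels=3, out_channels=5, kernel_size=3, padding=1, bias=False)
print(conv.weight.shape)       # torch.Size([5, 3, 3, 3])

x = torch.randn(1, 3, 16, 16)  # a batch with one 3-channel image
print(conv(x).shape)           # torch.Size([1, 5, 16, 16]): one feature map per output channel
&lt;/code&gt;&lt;/pre&gt;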
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--the-hmax-model"&gt;Convolutional Neural Networks : the HMAX model&lt;/h2&gt;
&lt;figure id="figure-serre-and-poggio-2006httpsbiologystackexchangecomquestions10955ventral-stream-pathway-and-architecture-proposed-by-poggios-group"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.stack.imgur.com/ZlFnp.png" alt="[[Serre and Poggio, 2006]](https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group)" loading="lazy" data-zoomable width="65%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://biology.stackexchange.com/questions/10955/ventral-stream-pathway-and-architecture-proposed-by-poggios-group" target="_blank" rel="noopener"&gt;[Serre and Poggio, 2006]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;state of the art&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks-cnns"&gt;Convolutional Neural Networks (CNNs)&lt;/h2&gt;
&lt;figure id="figure-jérémie--lp-2023httpslaurentperrinetgithubiopublicationjeremie-23-ultra-fast-cat"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.mdpi.com/vision/vision-07-00029/article_deploy/html/images/vision-07-00029-g003.png" alt="[[Jérémie &amp; LP, 2023](https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/jeremie-23-ultra-fast-cat/" target="_blank" rel="noopener"&gt;Jérémie &amp;amp; LP, 2023&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--hierarchy-1"&gt;Convolutional Neural Networks : hierarchy&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;backpropagation is not bioplausible&lt;/li&gt;
&lt;li&gt;modification&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--predictive-coding"&gt;Convolutional Neural Networks : Predictive coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;adding sparse coding + feedback&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--predictive-coding-1"&gt;Convolutional Neural Networks : Predictive coding&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/BoutinFranciosiniChavaneRuffierPerrinet20face.png" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;interpretable features&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--topography"&gt;Convolutional Neural Networks : Topography&lt;/h2&gt;
&lt;figure id="figure-bosking-et-al-1997"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Bosking97Fig4.jpg" alt="[Bosking *et al*, 1997]" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Bosking &lt;em&gt;et al&lt;/em&gt;, 1997]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="convolutional-neural-networks--topography-1"&gt;Convolutional Neural Networks : Topography&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2022httpslaurentperrinetgithubiopublicationfranciosini-21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/franciosini-21/featured.jpg" alt="[[Boutin *et al*, 2022](https://laurentperrinet.github.io/publication/franciosini-21/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/franciosini-21/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2022&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="computational-neuroscience-of-vision-2"&gt;Computational neuroscience of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h1 id="dynamics-of-vision"&gt;Dynamics of vision&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;!--
---
## Dynamics of vision
&lt;figure id="figure-thorpe-2001httpslaurentperrinetgithubio2022-01-12_neurocercle21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/scheme_thorpe.jpg" alt="[[Thorpe (2001)]](https://laurentperrinet.github.io/2022-01-12_NeuroCercle/#/2/1)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/#/2/1" target="_blank" rel="noopener"&gt;[Thorpe (2001)]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;!--
---
## Dynamics of vision
&lt;figure id="figure-precise-spiking-motifs-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency-estimate.jpg" alt="Precise Spiking Motifs] ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Precise Spiking Motifs] (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-1"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-visual-latencies-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency_bg.jpg" alt="Visual latencies ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In particular in our group, we are interested in dynamics of neural processing&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The visual system is very efficient at generating a decision: from the retinal image, through the different stages of the visual pathways (here for a macaque monkey), to a reaction of the finger muscles in about 300 milliseconds.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the process of categorizing an object involves about 10 such processing layers&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-2"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-visual-latencies-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency.jpg" alt="Visual latencies ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual latencies (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;1 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;the latencies are similar in the human brain, merely scaled with brain size&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;as a consequence, it is thought that this efficiency is achieved by spikes, that is, brief all-or-none events which are passed from one assembly of neurons to another within the very large network which forms the brain.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-3"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-4"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-sensorimotor-delays-perrinet--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/figure-tsonga.jpg" alt="Sensorimotor delays ([Perrinet &amp; Friston, 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))" loading="lazy" data-zoomable width="75%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Sensorimotor delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet &amp;amp; Friston, 2014&lt;/a&gt;)
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-5"&gt;Dynamics of vision&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/perrinet-19-temps/flash_lag.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-6"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-diagonal-markov-model-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_DiagonalMarkov.jpg" alt="Diagonal markov model ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Diagonal markov model (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-7"&gt;Dynamics of vision&lt;/h2&gt;
&lt;!--
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/PBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/MBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
--&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/positional-delay.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;Flash-lag effect: MBP (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;)&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="dynamics-of-vision-8"&gt;Dynamics of vision&lt;/h2&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h1 id="spiking-neural-networks"&gt;Spiking Neural Networks&lt;/h1&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-leaky-integrate-and-fire-neuron"&gt;Spiking Neural Networks: Leaky Integrate-and-Fire Neuron&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
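&lt;p&gt;A minimal Euler-integration sketch of such a leaky integrate-and-fire unit (parameter values are illustrative, not those of the animation):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;import numpy as np

dt, tau = 1e-3, 20e-3                        # time step and membrane time constant (s)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0    # resting, threshold and reset potentials
T = int(0.5 / dt)                            # 500 ms of simulation
I = 1.2 * np.ones(T)                         # constant supra-threshold input (arbitrary units)

v, spikes = v_rest, []
for t in range(T):
    v += dt / tau * (v_rest - v + I[t])      # leaky integration of the membrane potential
    if v &gt;= v_thresh:                        # a threshold crossing emits a spike...
        spikes.append(t * dt)
        v = v_reset                          # ...followed by a reset
print(f"{len(spikes)} spikes, the first one at {spikes[0] * 1e3:.1f} ms")
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;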
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://i.sstatic.net/ixnrz.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-1"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-mainen--sejnowski-1995httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_2_mainensejnowski1995ipynb"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" alt="[[Mainen &amp; Sejnowski, 1995](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener"&gt;Mainen &amp;amp; Sejnowski, 1995&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reproducibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-2"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-diesmann-et-al-1999httpsgithubcomspikeai2022_polychronies-reviewblobmainsrcfigure_3_diesmann_et_al_1999py"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Diesmann_et_al_1999.png" alt="[[Diesmann et al. 1999](https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py" target="_blank" rel="noopener"&gt;Diesmann et al. 1999&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neurobiology-3"&gt;Spiking Neural Networks in neurobiology&lt;/h2&gt;
&lt;figure id="figure-haimerl-et-al-2019httpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" alt="[[Haimerl et al, 2019](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)]" loading="lazy" data-zoomable width="99%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Haimerl et al, 2019&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Izhikevich polychronization&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;yet the domain is vast, and there is a lot to do in SNNs&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-1"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/LIF.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A standard LIF&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-spiking-motifs-2"&gt;Spiking Neural Networks: Spiking motifs&lt;/h2&gt;
&lt;figure id="figure-review-on-precise-spiking-motifshttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/HSD.gif" alt="Review on [Precise Spiking Motifs](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/)." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Review on &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;Precise Spiking Motifs&lt;/a&gt;.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HSD neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;event-based cameras&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering-1"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/HDSNN_conv.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering-2"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/FastMotionDetection_input.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A nice HSD neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For instance, we show how precise spike times may be used to detect the direction of motion from such a stream of events in an ultrafast fashion.&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering-3"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/motion_kernels.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;nice kernels&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks-in-neuromorphic-engineering-4"&gt;Spiking Neural Networks in neuromorphic engineering&lt;/h2&gt;
&lt;figure id="figure-the-hd-snn-neural-network"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-23-bc/accuracy.png" alt="The HD-SNN neural network." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
The HD-SNN neural network.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;strong&gt;2 MINUTE&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;frugal computing&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="artificial-neural-networks-and-machine-learning-applied-to-the-understanding-of-biological-vision-1"&gt;Artificial neural networks and machine learning applied to the understanding of biological vision&lt;/h2&gt;
&lt;figure id="figure-marr-1982httpsoutdexyz2020-01-12overappreciated-arguments-marrs-three-levelshtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://outde.xyz/img/Rawski/Marr/3Lvls.jpg" alt="[[Marr, 1982](https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://outde.xyz/2020-01-12/overappreciated-arguments-marrs-three-levels.html" target="_blank" rel="noopener"&gt;Marr, 1982&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Only the speaker can read these notes&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;S&lt;/code&gt; key to view&lt;/li&gt;
&lt;li&gt;more on &lt;a href="https://raw.githubusercontent.com/wowchemy/starter-hugo-academic/master/exampleSite/content/slides/example/index.md" target="_blank" rel="noopener"&gt;doc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="artificial-neural-networks-and-machine-learning-applied-to-the-understanding-of-biological-vision-2"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2023-04-03-master-m-4-nc/?transition=fade" target="_blank" rel="noopener"&gt;Artificial neural networks and machine learning applied to the understanding of biological vision&lt;/a&gt;&lt;/h1&gt;
&lt;h4 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-master-m4nc-de-l-1"&gt;&lt;u&gt;&lt;a href="https://ametice.univ-amu.fr/pluginfile.php/5559779/mod_resource/content/1/Planning_Neurocomp_M1_2022.pdf" target="_blank" rel="noopener"&gt;[2023-04-03]&lt;/a&gt; &lt;a href="https://neuromod.univ-cotedazur.eu" target="_blank" rel="noopener"&gt;Master M4NC de l&amp;rsquo;institut NeuroMod, cours Prospective Innovation and Research.&lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;/section&gt;</description></item><item><title>2023-01-23_game-theory-and-the-brain</title><link>https://laurentperrinet.github.io/slides/2023-01-23_game-theory-and-the-brain/</link><pubDate>Mon, 23 Jan 2023 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2023-01-23_game-theory-and-the-brain/</guid><description>&lt;h1 id="game-theory-and-brain-strategies"&gt;Game theory and brain strategies&lt;/h1&gt;
&lt;img src="https://laurentperrinet.github.io/publication/perrinet-21-hasard/featured.jpg" width="50%" &gt;
&lt;p&gt;&lt;strong&gt;[2023-01-23] Atelier jeu et cerveau&lt;/strong&gt;&lt;/p&gt;
&lt;p style="color:blue;font-size:25px;"&gt;
&lt;a href="https://laurentperrinet.github.io/talk/2023-01-23-game-theory-and-the-brain"&gt;https://laurentperrinet.github.io/talk/2023-01-23-game-theory-and-the-brain&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Only the speaker can read these notes&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;S&lt;/code&gt; key to view&lt;/li&gt;
&lt;li&gt;Photo by Naser Tamimi on Unsplash &lt;a href="https://unsplash.com/fr/photos/yG9pCqSOrAg" target="_blank" rel="noopener"&gt;https://unsplash.com/fr/photos/yG9pCqSOrAg&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="game-theory-and-brain-strategies-1"&gt;Game theory and brain strategies&lt;/h1&gt;
&lt;img src="https://laurentperrinet.github.io/publication/perrinet-21-hasard/featured.jpg" width="80%" &gt;
&lt;p&gt;&lt;a href="https://theconversation.com/le-jeu-du-cerveau-et-du-hasard-159388"&gt;Le jeu du cerveau et du hasard, &lt;i&gt;The Conversation&lt;/i&gt;&lt;/a&gt;&lt;/p&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;What is noise? The uncertainty due to noise is symbolized by dice: a throw of fair dice, even if optimally simulated, cannot be predicted; the outcome is uniformly one face from 1 to 6,&lt;/li&gt;
&lt;li&gt;I am interested in vision, and uncertainty exists in different forms,&lt;/li&gt;
&lt;li&gt;If we consider the image, uncertainty can come from noise at low contrast, from the complexity of the object, or from the pose of the dice,&lt;/li&gt;
&lt;li&gt;in this presentation, we will see different facets of noise and uncertainty, and illustrate how our brains may play with it - and delineate a theory for this game. We will also see how it may harness the noise by explicitly representing it in the neural activity.&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="aleatoric-noise"&gt;Aleatoric noise&lt;/h1&gt;
&lt;hr&gt;
&lt;!--
&lt;figure id="figure-random-points--a"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://a5huynh.github.io/img/2019/rng-example.png" alt="Random points (A)." loading="lazy" data-zoomable width="49%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Random points (A).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;figure id="figure-random-points--b"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://a5huynh.github.io/img/2019/poisson-disk-example.png" alt="Random points (B)." loading="lazy" data-zoomable width="49%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Random points (B).
&lt;/figcaption&gt;&lt;/figure&gt;
--&gt;
&lt;img src="https://a5huynh.github.io/img/2019/rng-example.png" width="70%" &gt;
&lt;img src="https://a5huynh.github.io/img/2019/poisson-disk-example.png" width="70%" &gt;
&lt;p&gt;&lt;a href="https://a5huynh.github.io/posts/2019/poisson-disk-sampling/" target="_blank" rel="noopener"&gt;A Huynh, generating Poisson disk noise&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;what is noise? it exists at the quantum level, but if I were to ask you to draw random points, what would they look like?&lt;/li&gt;
&lt;li&gt;Aleatoric comes from alea, the Latin word for “dice.” Aleatoric uncertainty is the uncertainty introduced by the randomness of an event. For example, the result of flipping a coin is an aleatoric event.&lt;/li&gt;
&lt;li&gt;In your opinion, which of the two is the most random pattern?&lt;/li&gt;
&lt;li&gt;from your responses &amp;hellip;&lt;/li&gt;
&lt;li&gt;the answer is that &amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When it comes to true randomness, one of its stranger aspects is that it often behaves differently to people’s expectations. Take the two diagrams below – which one do you think is a random distribution, and which has been deliberately created/adjusted?&lt;/p&gt;
&lt;p&gt;Only one of these panels shows a random distribution of dots (source: Bully for Brontosaurus, Stephen Jay Gould).&lt;/p&gt;
&lt;p&gt;If you said the right panel, you are in good company, as this is most people’s expectation of what randomness looks like. However, this relatively uniform distribution has been adjusted to ensure the dots are evenly spread. In fact, it is the left panel, with its clumps and voids, that reflects a true random distribution. It is also this tendency for randomness to produce clumps and voids that leads to some unintuitive outcomes.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://theconversation.com/daniel-kahneman-on-noise-the-flaw-in-human-judgement-harder-to-detect-than-cognitive-bias-160525" target="_blank" rel="noopener"&gt;https://theconversation.com/daniel-kahneman-on-noise-the-flaw-in-human-judgement-harder-to-detect-than-cognitive-bias-160525&lt;/a&gt;&lt;/p&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;figure id="figure-instabilité-etienne-reyhttpslaurentperrinetgithubiopost2018-09-09_artorama"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/post/2018-09-09_artorama/featured.png" alt="[Instabilité, Etienne Rey.](https://laurentperrinet.github.io/post/2018-09-09_artorama/)" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/post/2018-09-09_artorama/" target="_blank" rel="noopener"&gt;Instabilité, Etienne Rey.&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;this was for instance used by the artist Etienne Rey to generate large panels&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;our perception will generate objects out of nowhere: surfaces, groups, holes&amp;hellip;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;this explains many cognitive biases, for instance that we expect noise to have some regularity and that we wish to explain any cluster of events by some god-like divinity&amp;hellip;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-1976-viking-orbiter-imagehttpsfrwikipediaorgwikicydonia_mensae"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Face-on-mars.jpg" alt="[Cydonia Mensae (1976) *Viking Orbiter image*](https://fr.wikipedia.org/wiki/Cydonia_Mensae)" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Cydonia_Mensae" target="_blank" rel="noopener"&gt;Cydonia Mensae (1976) &lt;em&gt;Viking Orbiter image&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;going further &amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsfrwikipediaorgwikicydonia_mensae"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_low.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://fr.wikipedia.org/wiki/Cydonia_Mensae)" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Cydonia_Mensae" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;when going to the same place a few years later &amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="visual-illusions--pareidolia-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Visual illusions&lt;/a&gt; : &lt;a href="https://en.wikipedia.org/wiki/Pareidolia" target="_blank" rel="noopener"&gt;Pareidolia&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsfrwikipediaorgwikicydonia_mensae"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_high.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://fr.wikipedia.org/wiki/Cydonia_Mensae)" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Cydonia_Mensae" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;the face was gone &amp;hellip;&lt;/li&gt;
&lt;li&gt;conclusion 1: information pops out from noise&lt;/li&gt;
&lt;li&gt;conclusion 2: further information may change the interpretation&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="sequence-prediction"&gt;Sequence prediction&lt;/h1&gt;
&lt;video controls &gt;
&lt;source src="https://github.com/chloepasturel/AnticipatorySPEM/raw/master/2020-03_video-abstract/Bet_eyeMvt/eyeMvt.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;to test this in the lab, we analyzed the response of observers to sequences of left / right moving dots&lt;/li&gt;
&lt;li&gt;These were presented in multiple blocks of 50 trials for which we recorded eye movements and, on a subsequent day, asked them to bet on the direction of the upcoming trial&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="sequence-prediction-1"&gt;Sequence prediction&lt;/h1&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;A: 👍👍👍👍🤘👍👍👍👍👍🤘👍👍👍👍🤘👍👍👍👍👍🤘👍🤘👍👍👍👍👍🤘 ?
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;B: 👍🤘🤘🤘👍👍👍🤘🤘👍🤘👍🤘👍👍🤘👍🤘👍👍👍🤘👍🤘👍🤘🤘🤘👍🤘 ?
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;C: 👍🤘🤘🤘👍🤘👍👍🤘🤘🤘🤘🤘🤘👍👍🤘👍🤘🤘🤘👍🤘👍🤘🤘🤘🤘🤘👍 ?
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;D: 🤘🤘🤘🤘🤘👍🤘🤘🤘👍🤘🤘🤘🤘👍🤘👍👍👍👍👍🤘👍🤘👍👍👍👍👍🤘 ?
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;to simplify the problem, let&amp;rsquo;s display these sequences as a series of these 2 emojis&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In sequence A, what do you think the next item will be?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the same question could be asked in an online fashion&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;in sequence B, it&amp;rsquo;s certainly the same answer, yet with less certainty&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;in sequence C, you go metal 🤘&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;in sequence D, it&amp;rsquo;s different: there is clearly a tendency for 🤘 but then it switches to 👍&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;is it possible that the brain may detect such switches?&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
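&lt;p&gt;As a toy sketch, much simpler than the hierarchical model presented next, the probability of the next symbol can be tracked online with a leaky estimate whose forgetting allows it to follow such switches:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;import numpy as np

rng = np.random.default_rng(42)
# a synthetic block of trials with a switch in the probability of 👍 halfway through
p_true = np.r_[0.9 * np.ones(50), 0.2 * np.ones(50)]
seq = (rng.random(100) &lt; p_true).astype(float)

tau = 10.0                        # memory of the estimator, in trials
p_hat, estimates = 0.5, []
for x in seq:
    p_hat += (x - p_hat) / tau    # leaky running estimate of P(next = 👍)
    estimates.append(p_hat)

print(f"estimate before the switch: {estimates[49]:.2f}, at the end: {estimates[-1]:.2f}")
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;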
&lt;hr&gt;
&lt;h1 id="sequence-prediction-2"&gt;Sequence prediction&lt;/h1&gt;
&lt;figure id="figure-pasturel-et-al-2020httpslaurentperrinetgithubiopublicationpasturel-montagnini-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/pasturel-montagnini-perrinet-20/synthesis.png" alt="([Pasturel *et al*, 2020](https://laurentperrinet.github.io/publication/pasturel-montagnini-perrinet-20/))." loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
(&lt;a href="https://laurentperrinet.github.io/publication/pasturel-montagnini-perrinet-20/" target="_blank" rel="noopener"&gt;Pasturel &lt;em&gt;et al&lt;/em&gt;, 2020&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;to synthesize, we have a generative model&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;we derived the mathematically optimal solution, and found that both eye movements and bets follow the model with switches&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The aleatoric noise is transformed into a measure of knowledge = epistemic noise&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="epistemic-noise"&gt;Epistemic noise&lt;/h1&gt;
&lt;!--
---
# Playing with noise
&lt;figure id="figure-nash-equilibrium-rock-paper-scissorshttpsenwikipediaorgwikirock_paper_scissors"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/6/67/Rock-paper-scissors.svg" alt="Nash equilibrium ([Rock paper scissors](https://en.wikipedia.org/wiki/Rock_paper_scissors))." loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Nash equilibrium (&lt;a href="https://en.wikipedia.org/wiki/Rock_paper_scissors" target="_blank" rel="noopener"&gt;Rock paper scissors&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;let&amp;rsquo;s go back to game theory&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Rock paper scissors: Its French name, &amp;ldquo;Chi-fou-mi&amp;rdquo;, is based on the Old Japanese words for &amp;ldquo;one, two, three&amp;rdquo; (&amp;ldquo;hi, fu, mi&amp;rdquo;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Nash Equilibrium is a game theory concept that determines the optimal solution in a non-cooperative game in which each player lacks any incentive to change his/her initial strategy. Under the Nash equilibrium, a player does not gain anything from deviating from their initially chosen strategy, assuming the other players also keep their strategies unchanged.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.quantamagazine.org/the-game-theory-math-behind-rock-paper-scissors-20180402/" target="_blank" rel="noopener"&gt;https://www.quantamagazine.org/the-game-theory-math-behind-rock-paper-scissors-20180402/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
---
&lt;figure id="figure-prisoners-dilemma-salem-marafihttpwwwsalemmaraficombusinessprisoners-dilemma"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="http://www.salemmarafi.com/wp-content/uploads/2011/10/prisoners_dilemma.jpg" alt="Prisoner’s Dilemma ([Salem Marafi](http://www.salemmarafi.com/business/prisoners-dilemma/))." loading="lazy" data-zoomable width="60%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Prisoner’s Dilemma (&lt;a href="http://www.salemmarafi.com/business/prisoners-dilemma/" target="_blank" rel="noopener"&gt;Salem Marafi&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Only the speaker can read these notes&lt;/li&gt;
&lt;li&gt;uncertainty comes not from aleatoric noise but from not knowing: epistemic uncertainty&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt; --&gt;
&lt;hr&gt;
&lt;h1 id="representing-uncertainty"&gt;Representing uncertainty&lt;/h1&gt;
&lt;figure id="figure-visual-epistemic-uncertainty-hugo-ladrethttpstheconversationcomle-jeu-du-cerveau-et-du-hasard-159388"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://images.theconversation.com/files/407867/original/file-20210623-17-ai1gc3.png" alt="Visual epistemic uncertainty ([Hugo Ladret](https://theconversation.com/le-jeu-du-cerveau-et-du-hasard-159388))." loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual epistemic uncertainty (&lt;a href="https://theconversation.com/le-jeu-du-cerveau-et-du-hasard-159388" target="_blank" rel="noopener"&gt;Hugo Ladret&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;in the case of images, a local patch may have the same most likely orientation, yet with different bandwidth (textures)&lt;/li&gt;
&lt;li&gt;the role of the primary visual cortex of mammals, such as humans, is to detect orientations&lt;/li&gt;
&lt;li&gt;will the response be the same for both cases?&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="representing-uncertainty-1"&gt;Representing uncertainty&lt;/h1&gt;
&lt;figure id="figure-visual-epistemic-uncertainty-hugo-ladrethttpslaurentperrinetgithubiopublicationladret-23"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/ladret-23/featured.png" alt="Visual epistemic uncertainty ([Hugo Ladret](https://laurentperrinet.github.io/publication/ladret-23/))." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Visual epistemic uncertainty (&lt;a href="https://laurentperrinet.github.io/publication/ladret-23/" target="_blank" rel="noopener"&gt;Hugo Ladret&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Only the speaker can read these notes&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;S&lt;/code&gt; key to view&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="conclusion"&gt;Conclusion&lt;/h1&gt;
&lt;hr&gt;
&lt;h1 id="game-theory-and-brain-strategies-2"&gt;Game theory and brain strategies&lt;/h1&gt;
&lt;img src="https://laurentperrinet.github.io/publication/perrinet-21-hasard/featured.jpg" width="80%" &gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;In the face of noise, the brain plays a game&lt;/li&gt;
&lt;li&gt;Evolution favors not fitness but adaptability&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="game-theory-and-brain-strategies-3"&gt;Game theory and brain strategies&lt;/h1&gt;
&lt;figure id="figure-aleatoric-uncertainty-pasturel-et-al-2020httpslaurentperrinetgithubiopublicationpasturel-montagnini-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/publication/pasturel-montagnini-perrinet-20/synthesis.png" alt="Aleatoric uncertainty ([Pasturel *et al*, 2020](https://laurentperrinet.github.io/publication/pasturel-montagnini-perrinet-20/))." loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Aleatoric uncertainty (&lt;a href="https://laurentperrinet.github.io/publication/pasturel-montagnini-perrinet-20/" target="_blank" rel="noopener"&gt;Pasturel &lt;em&gt;et al&lt;/em&gt;, 2020&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;The brain uses predictive coding, for instance for sequence learning&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="game-theory-and-brain-strategies-4"&gt;Game theory and brain strategies&lt;/h1&gt;
&lt;figure id="figure-epistemic-uncertainty-hugo-ladrethttpstheconversationcomle-jeu-du-cerveau-et-du-hasard-159388"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://images.theconversation.com/files/407867/original/file-20210623-17-ai1gc3.png" alt="Epistemic uncertainty ([Hugo Ladret](https://theconversation.com/le-jeu-du-cerveau-et-du-hasard-159388))." loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Epistemic uncertainty (&lt;a href="https://theconversation.com/le-jeu-du-cerveau-et-du-hasard-159388" target="_blank" rel="noopener"&gt;Hugo Ladret&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;For this, it explicitly represents uncertainty (epistemic noise)&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h1 id="questions"&gt;Questions?&lt;/h1&gt;
&lt;p&gt;Ask info @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;More info @ &lt;a href="https://laurentperrinet.github.io/slides/2023-01-23_game-theory-and-the-brain" target="_blank" rel="noopener"&gt;web-site&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>2022-11-21_flash-lag-effect</title><link>https://laurentperrinet.github.io/slides/2022-11-21_flash-lag-effect/</link><pubDate>Mon, 21 Nov 2022 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2022-11-21_flash-lag-effect/</guid><description>&lt;table width="100%"&gt;
&lt;tr&gt;
&lt;th width="80%"&gt;
&lt;img src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/header.png" width="100%" &gt;
&lt;th width="20%"&gt;
&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/coverart.jpg" width="100%" &gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;[2022-11-21] Alex Reynaud&amp;rsquo;s lab meeting&lt;/strong&gt;&lt;/p&gt;
&lt;p style="color:blue;font-size:25px;"&gt;
&lt;a href="https://laurentperrinet.github.io/slides/2022-11-21_flash-lag-effect"&gt;https://laurentperrinet.github.io/slides/2022-11-21_flash-lag-effect&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;ul&gt;
&lt;li&gt;Only the speaker can read these notes&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;S&lt;/code&gt; key to view&lt;/li&gt;
&lt;/ul&gt;
&lt;/aside&gt;
&lt;!--
---
&lt;table width="100%"&gt;
&lt;tr&gt;
&lt;th width="80%"&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="../../publication/khoei-masson-perrinet-17/header.png" alt="" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/th&gt;
&lt;th width="20%"&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/coverart.jpg" alt="" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
__[2022-11-21] Alex Reynaud's lab meeting__
&lt;p style="color:blue;font-size:25px;"&gt;
&lt;a href="https://laurentperrinet.github.io/slides/2022-11-21_flash-lag-effect"&gt;https://laurentperrinet.github.io/slides/2022-11-21_flash-lag-effect&lt;/a&gt;&lt;/p&gt; --&gt;
&lt;!-- ---
|
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="../../publication/khoei-masson-perrinet-17/header.png" alt="" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/coverart.jpg" alt="" loading="lazy" data-zoomable width="29%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
|
__[2022-11-21] Alex Reynaud's lab meeting__
https://laurentperrinet.github.io/slides/2022-11-21_flash-lag-effect
--&gt;
&lt;hr&gt;
&lt;h2 id="timing-in-the-visual-pathways"&gt;Timing in the visual pathways&lt;/h2&gt;
&lt;hr&gt;
&lt;figure id="figure-ultra-rapid-visual-processing-see-reviewhttpslaurentperrinetgithubiopublicationgrimaldi-22-polychronies"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="../../publication/grimaldi-22-polychronies/featured.jpg" alt="Ultra-rapid visual processing ([see review](https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/))." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Ultra-rapid visual processing (&lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener"&gt;see review&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-compensating-visual-delays-perrinet-adams--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/tsonga.jpg" alt="Compensating visual delays ([Perrinet, Adams &amp; Friston 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Compensating visual delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet, Adams &amp;amp; Friston 2014&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-compensating-visual-delays-perrinet-adams--friston-2014httpslaurentperrinetgithubiopublicationperrinet-adams-friston-14"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/figure-tsonga.jpg" alt="Compensating visual delays ([Perrinet Adams &amp; Friston, 2014](https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/))." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Compensating visual delays (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-adams-friston-14/" target="_blank" rel="noopener"&gt;Perrinet Adams &amp;amp; Friston, 2014&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="travelling-waves"&gt;Travelling waves?&lt;/h2&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/line_motion.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/phi_motion.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;figure id="figure-suppressive-travelling-waves-chemla-et-al-2019httpslaurentperrinetgithubiopublicationchemla-19"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://raw.githubusercontent.com/laurentperrinet/2019-04-18_JNLF/master/figures/Chemla_etal2019.png" alt="Suppressive travelling waves ([Chemla *et al*, 2019](https://laurentperrinet.github.io/publication/chemla-19/))." loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Suppressive travelling waves (&lt;a href="https://laurentperrinet.github.io/publication/chemla-19/" target="_blank" rel="noopener"&gt;Chemla &lt;em&gt;et al&lt;/em&gt;, 2019&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="predictive-coding"&gt;Predictive coding&lt;/h2&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/aperture_aperture.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/aperture_cube.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;figure id="figure-motion-based-prediction-perrinet-et-al-2012httpslaurentperrinetgithubiopublicationperrinet-12-pred"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/navier.svg" alt="Motion-based prediction ([Perrinet *et al*, 2012](https://laurentperrinet.github.io/publication/perrinet-12-pred/))." loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Motion-based prediction (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-12-pred/" target="_blank" rel="noopener"&gt;Perrinet &lt;em&gt;et al&lt;/em&gt;, 2012&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-motion-based-prediction-perrinet-et-al-2012httpslaurentperrinetgithubiopublicationperrinet-12-pred"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/perrinet12pred_figure2.png" alt="Motion-based prediction ([Perrinet *et al*, 2012](https://laurentperrinet.github.io/publication/perrinet-12-pred/))." loading="lazy" data-zoomable width="61%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Motion-based prediction (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-12-pred/" target="_blank" rel="noopener"&gt;Perrinet &lt;em&gt;et al&lt;/em&gt;, 2012&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/line_particles.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;Motion-based prediction (&lt;a href="https://laurentperrinet.github.io/publication/perrinet-12-pred/" target="_blank" rel="noopener"&gt;Perrinet &lt;em&gt;et al&lt;/em&gt;, 2012&lt;/a&gt;).&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="flash-lag-effect"&gt;Flash-lag effect&lt;/h2&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/perrinet-19-temps/flash_lag.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;figure id="figure-flash-lag-effect-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_cartoon.jpg" alt="Flash-lag effect ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Flash-lag effect (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-diagonal-markov-model-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_DiagonalMarkov.jpg" alt="Diagonal markov model ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Diagonal Markov model (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;p&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/PBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/MBP_spatial_readout.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;/p&gt;
&lt;p&gt;Flash-lag effect: PBP versus MBP spatial readouts (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).&lt;/p&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/sciblog/files/2016-07-07_EDP-proba/figures/positional-delay.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;figure id="figure-flash-lag-effect-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE.jpg" alt="Flash-lag effect ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Flash-lag effect (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-probability-distributions-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_histogram.jpg" alt="Probability distributions ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Probability distributions (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-probability-distributions-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_histogram_comp.jpg" alt="Probability distributions ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Probability distributions (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-motion-reversal-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_MotionReversal_MBP.jpg" alt="Motion reversal ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Motion reversal (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-motion-reversal-smoothed-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_MotionReversal.jpg" alt="Motion reversal (smoothed) ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Motion reversal (smoothed) (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/perrinet-19-temps/flash_lag_stop.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;figure id="figure-probability-distributions-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_histogram.jpg" alt="Probability distributions ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Probability distributions (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-limit-cycles-khoei-et-al-2017httpslaurentperrinetgithubiopublicationkhoei-masson-perrinet-17"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/Khoei_2017_PLoSCB/raw/master/figures/FLE_limit_cycles.jpg" alt="Limit cycles ([Khoei *et al*, 2017](https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/))." loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Limit cycles (&lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;Khoei &lt;em&gt;et al&lt;/em&gt;, 2017&lt;/a&gt;).
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-application-to-the-pulfrich-phenomenonhttpseyewikiaaoorgpulfrich_phenomenon"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://eyewiki.aao.org/w/images/1/e/eb/Pulfrich.png" alt="Application to the [Pulfrich phenomenon](https://eyewiki.aao.org/Pulfrich_Phenomenon)?" loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Application to the &lt;a href="https://eyewiki.aao.org/Pulfrich_Phenomenon" target="_blank" rel="noopener"&gt;Pulfrich phenomenon&lt;/a&gt;?
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h1 id="questions"&gt;Questions?&lt;/h1&gt;
&lt;p&gt;Ask info @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;More info @ &lt;a href="https://laurentperrinet.github.io/slides/2022-11-21_flash-lag-effect" target="_blank" rel="noopener"&gt;web-site&lt;/a&gt; + &lt;a href="https://laurentperrinet.github.io/publication/khoei-masson-perrinet-17/" target="_blank" rel="noopener"&gt;paper&lt;/a&gt;&lt;/p&gt;</description></item><item><title>2022-03-23_UE-neurosciences-computationnelles</title><link>https://laurentperrinet.github.io/slides/2022-03-23_ue-neurosciences-computationnelles/</link><pubDate>Wed, 23 Mar 2022 09:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2022-03-23_ue-neurosciences-computationnelles/</guid><description>&lt;h1 id="réseaux-de-neurones-artificiels-et-apprentissage-machine-appliqués-à-la-compréhension-de-la-vision"&gt;&lt;a href="https://github.com/laurentperrinet/2022_UE-neurosciences-computationnelles" target="_blank" rel="noopener"&gt;Réseaux de neurones artificiels et apprentissage machine appliqués à la compréhension de la vision&lt;/a&gt;&lt;/h1&gt;
&lt;h4 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2022-03-23-ue-neurosciences-computationnelles/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h4&gt;
&lt;h4 id="-master-1-neurosciences-et-sciences-cognitives"&gt;&lt;u&gt;&lt;a href="https://ametice.univ-amu.fr/pluginfile.php/5559779/mod_resource/content/1/Planning_Neurocomp_M1_2022.pdf" target="_blank" rel="noopener"&gt;[2022-03-23]&lt;/a&gt; &lt;a href="https://ametice.univ-amu.fr/course/view.php?id=89069" target="_blank" rel="noopener"&gt;Master 1 Neurosciences et Sciences Cognitives&lt;/a&gt;&lt;/u&gt;&lt;/h4&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.png" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;hr&gt;
&lt;h1 id="principes-de-la-vision"&gt;Principes de la Vision&lt;/h1&gt;
&lt;hr&gt;
&lt;h2 id="à-quoi-sert-la-vision"&gt;À quoi sert la vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-ilya-repin-1884httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_001.jpg" alt="[An Unexpected Visitor (Ilya Repin, 1884)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Ilya Repin, 1884)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="à-quoi-sert-la-vision-1"&gt;À quoi sert la vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_002.jpg" alt="[An Unexpected Visitor (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="à-quoi-sert-la-vision-2"&gt;À quoi sert la vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---age-yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_003.jpg" alt="[An Unexpected Visitor - *Age?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;Age?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="à-quoi-sert-la-vision-3"&gt;À quoi sert la vision?&lt;/h2&gt;
&lt;figure id="figure-an-unexpected-visitor---how-long--yarbus-1965httpswwwcabinetmagazineorgissues30archibaldphp"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://www.cabinetmagazine.org/issues/30/cabinet_030_archibald_sasha_006.jpg" alt="[An Unexpected Visitor - *How long?* (Yarbus, 1965)](https://www.cabinetmagazine.org/issues/30/archibald.php)" loading="lazy" data-zoomable width="45%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://www.cabinetmagazine.org/issues/30/archibald.php" target="_blank" rel="noopener"&gt;An Unexpected Visitor - &lt;em&gt;How long?&lt;/em&gt; (Yarbus, 1965)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="les-illusions-visuelles"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Les illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion_without.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="les-illusions-visuelles-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Les illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-hering-illusionhttpsenwikipediaorgwikihering_illusion"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Hering_illusion.svg" alt="[Hering illusion](https://en.wikipedia.org/wiki/Hering_illusion)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://en.wikipedia.org/wiki/Hering_illusion" target="_blank" rel="noopener"&gt;Hering illusion&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="les-illusions-visuelles-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Les illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;
&lt;video controls &gt;
&lt;source src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Kitaoka.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Ilusions of brightness or lightness &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="les-illusions-visuelles-3"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Les illusions visuelles&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-rotating-snakes-akiyoshi-kitaokahttpwwwritsumeiacjpakitaokaindex-ehtml"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/42_rotsnakes_main.jpg" alt="[Rotating Snakes *Akiyoshi KITAOKA*](http://www.ritsumei.ac.jp/~akitaoka/index-e.html)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html" target="_blank" rel="noopener"&gt;Rotating Snakes &lt;em&gt;Akiyoshi KITAOKA&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="les-illusions-visuelles--paréidolie"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Les illusions visuelles&lt;/a&gt; : &lt;a href="https://fr.wikipedia.org/wiki/Par%C3%A9idolie" target="_blank" rel="noopener"&gt;Paréidolie&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-1976-viking-orbiter-imagehttpsfrwikipediaorgwikicydonia_mensae"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Face-on-mars.jpg" alt="[Cydonia Mensae (1976) *Viking Orbiter image*](https://fr.wikipedia.org/wiki/Cydonia_Mensae)" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Cydonia_Mensae" target="_blank" rel="noopener"&gt;Cydonia Mensae (1976) &lt;em&gt;Viking Orbiter image&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="les-illusions-visuelles--paréidolie-1"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Les illusions visuelles&lt;/a&gt; : &lt;a href="https://fr.wikipedia.org/wiki/Par%C3%A9idolie" target="_blank" rel="noopener"&gt;Paréidolie&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsfrwikipediaorgwikicydonia_mensae"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_low.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://fr.wikipedia.org/wiki/Cydonia_Mensae)" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Cydonia_Mensae" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="les-illusions-visuelles--paréidolie-2"&gt;&lt;a href="https://laurentperrinet.github.io/publication/perrinet-19-illusions/" target="_blank" rel="noopener"&gt;Les illusions visuelles&lt;/a&gt; : &lt;a href="https://fr.wikipedia.org/wiki/Par%C3%A9idolie" target="_blank" rel="noopener"&gt;Paréidolie&lt;/a&gt;&lt;/h2&gt;
&lt;figure id="figure-cydonia-mensae-2007-mars-global-surveyorhttpsfrwikipediaorgwikicydonia_mensae"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/Viking_moc_face_20m_high.png" alt="[Cydonia Mensae (2007) *Mars Global Surveyor*](https://fr.wikipedia.org/wiki/Cydonia_Mensae)" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Cydonia_Mensae" target="_blank" rel="noopener"&gt;Cydonia Mensae (2007) &lt;em&gt;Mars Global Surveyor&lt;/em&gt;&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="les-neurosciences-computationnelles"&gt;Les neurosciences computationnelles&lt;/h2&gt;
&lt;figure id="figure-sejnowski--koch---churchland-1998httpwwwhmsharvardedubssneurobornlabnb204paperssejnowski-koch-churchland-science1988pdf"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Churchland92.png" alt="[[Sejnowski, Koch &amp; Churchland (1998)](http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf)]" loading="lazy" data-zoomable width="35%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="http://www.hms.harvard.edu/bss/neuro/bornlab/nb204/papers/sejnowski-koch-churchland-science1988.pdf" target="_blank" rel="noopener"&gt;Sejnowski, Koch &amp;amp; Churchland (1998)&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h1 id="de-v1-aux-réseaux-convolutionnels"&gt;De V1 aux réseaux convolutionnels&lt;/h1&gt;
&lt;hr&gt;
&lt;h2 id="le-système-visuel"&gt;Le système visuel&lt;/h2&gt;
&lt;figure id="figure-système-visuel-humain-wikipediahttpsfrwikipediaorgwikisystc3a8me_visuel_humain"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/e/e4/Voies_visuelles3.svg" alt="[Système visuel humain (Wikipedia)](https://fr.wikipedia.org/wiki/Syst%C3%A8me_visuel_humain)" loading="lazy" data-zoomable width="40%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://fr.wikipedia.org/wiki/Syst%C3%A8me_visuel_humain" target="_blank" rel="noopener"&gt;Système visuel humain (Wikipedia)&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="le-cortex-visuel-primaire"&gt;Le cortex visuel primaire&lt;/h2&gt;
&lt;figure id="figure-hubel--wiesel-1962"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/scientists.jpg" alt="[Hubel &amp; Wiesel, 1962]" loading="lazy" data-zoomable width="80%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Hubel &amp;amp; Wiesel, 1962]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="hubel--wiesel"&gt;Hubel &amp;amp; Wiesel&lt;/h2&gt;
&lt;video controls &gt;
&lt;source src="https://raw.githubusercontent.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/master/figures/ComplexDirSelCortCell250_title.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;p&gt;[Hubel &amp;amp; Wiesel, 1962]&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="réseaux-convolutionnels--hiérarchie"&gt;Réseaux convolutionnels : hiérarchie&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="réseaux-convolutionnels---math"&gt;Réseaux convolutionnels : Math&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Discrete one-dimensional convolution (e.g. in time) with a kernel $f$ of radius $K$ (see the sketch below):
$$
(f \ast g)[n]=\sum_{m=-K}^{K} f[m] g[n-m]
$$&lt;/li&gt;
&lt;/ul&gt;
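&lt;p&gt;As an added sanity check (not part of the original slides), here is a minimal NumPy sketch that implements this sum directly and compares it against &lt;code&gt;numpy.convolve&lt;/code&gt;; the variable names are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def conv1d(f, g):
    # direct implementation of (f*g)[n] = sum_m f[m] g[n-m], with zero padding
    K = (len(f) - 1) // 2                  # kernel radius, assuming an odd-length kernel f
    out = np.zeros(len(g))
    for n in range(len(g)):
        for m in range(-K, K + 1):
            if (n - m) in range(len(g)):   # zero padding outside the signal
                out[n] += f[m + K] * g[n - m]
    return out

f = np.array([1., 2., 1.]) / 4.            # smoothing kernel of radius K=1
g = np.random.randn(100)                   # toy one-dimensional signal
assert np.allclose(conv1d(f, g), np.convolve(g, f, mode='same'))
&lt;/code&gt;&lt;/pre&gt;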
&lt;hr&gt;
&lt;h2 id="réseaux-convolutionnels---math-1"&gt;Réseaux convolutionnels : Math&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Discrete convolution of a (two-dimensional) image:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast g)[x, y] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[i, j] g[x-i, y-j]
$$&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="réseaux-convolutionnels--lopération-de-convolution"&gt;Réseaux convolutionnels : l&amp;rsquo;opération de convolution&lt;/h2&gt;
&lt;figure id="figure-amidi--amidihttpsstanfordedushervineteachingcs-230cheatsheet-convolutional-neural-networks"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://stanford.edu/~shervine/teaching/cs-230/illustrations/convolution-layer-a.png?1c517e00cb8d709baf32fc3d39ebae67" alt="[[Amidi &amp; Amidi](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks" target="_blank" rel="noopener"&gt;Amidi &amp;amp; Amidi&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="réseaux-convolutionnels--math"&gt;Réseaux convolutionnels : Math&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Discrete convolution of an image onto several output channels:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast g)[x, y, k] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} f[k, i, j] g[x-i, y-j]
$$&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="réseaux-convolutionnels--math-1"&gt;Réseaux convolutionnels : Math&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Discrete convolution of a multi-channel image (e.g. RGB) onto several output channels (note &lt;a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" target="_blank" rel="noopener"&gt;the ordering of the indices&lt;/a&gt;; see the sketch below):&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;$$
(f \ast g)[x, y, k] = \sum_{i=-K}^{K} \sum_{j=-K}^{K} \sum_{c=1}^{C} f[k, c, i, j] g[x-i, y-j, c]
$$&lt;/p&gt;
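&lt;p&gt;To make the index ordering concrete, here is a minimal PyTorch sketch (an added illustration, not taken from the course notebooks): the weight tensor of &lt;code&gt;torch.nn.Conv2d&lt;/code&gt; is laid out as &lt;code&gt;f[k, c, i, j]&lt;/code&gt;, i.e. [output channel, input channel, row, column]. Note that &lt;code&gt;nn.Conv2d&lt;/code&gt; actually computes a cross-correlation, so the kernel is not flipped as in the convolution written above.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch
import torch.nn as nn

# f[k, c, i, j]: 8 output channels, 3 input channels (RGB), 5x5 kernel (radius K=2)
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=5, padding=2, bias=False)
print(conv.weight.shape)       # torch.Size([8, 3, 5, 5]), i.e. [k, c, i, j]

g = torch.randn(1, 3, 32, 32)  # a batch of one 32x32 RGB image, laid out as [batch, c, y, x]
out = conv(g)
print(out.shape)               # torch.Size([1, 8, 32, 32]): one feature map per output channel k
&lt;/code&gt;&lt;/pre&gt;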
&lt;hr&gt;
&lt;h2 id="réseaux-convolutionnels--cnn"&gt;Réseaux convolutionnels : CNN&lt;/h2&gt;
&lt;figure id="figure-amidi--amidihttpsstanfordedushervineteachingcs-230cheatsheet-convolutional-neural-networks"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://stanford.edu/~shervine/teaching/cs-230/illustrations/architecture-cnn-fr.jpeg" alt="[[Amidi &amp; Amidi](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks" target="_blank" rel="noopener"&gt;Amidi &amp;amp; Amidi&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="mise-en-pratique-détecter--apprendre"&gt;Mise en pratique: détecter &amp;amp; apprendre&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Deep learning tutorial&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/laurentperrinet/2022_UE-neurosciences-computationnelles/blob/master/A_D%C3%A9tecter.ipynb" target="_blank" rel="noopener"&gt;Notebook &lt;code&gt;A_Détecter.ipynb&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/laurentperrinet/2022_UE-neurosciences-computationnelles/blob/master/B_Apprendre.ipynb" target="_blank" rel="noopener"&gt;Notebook &lt;code&gt;B_Apprendre.ipynb&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h1 id="perspectives"&gt;Perspectives&lt;/h1&gt;
&lt;hr&gt;
&lt;h2 id="réseaux-convolutionnels--hiérarchie-1"&gt;Réseaux convolutionnels : hiérarchie&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1_a.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="réseaux-prédictifs"&gt;Réseaux prédictifs&lt;/h2&gt;
&lt;figure id="figure-boutin-et-al-2021httpslaurentperrinetgithubiopublicationboutin-franciosini-chavane-ruffier-perrinet-20"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2019-04-03_a_course_on_vision_and_modelization/figures/boutin-franciosini-ruffier-perrinet-19_figure1.svg" alt="[[Boutin *et al*, 2021](https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://laurentperrinet.github.io/publication/boutin-franciosini-chavane-ruffier-perrinet-20/" target="_blank" rel="noopener"&gt;Boutin &lt;em&gt;et al&lt;/em&gt;, 2021&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="topographie-dans-v1"&gt;Topographie dans V1&lt;/h2&gt;
&lt;figure id="figure-bosking-et-al-1997"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2019-04-03_a_course_on_vision_and_modelization/raw/master/figures/Bosking97Fig4.jpg" alt="[Bosking *et al*, 1997]" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[Bosking &lt;em&gt;et al&lt;/em&gt;, 1997]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks"&gt;Spiking Neural Networks&lt;/h2&gt;
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="recurrent-processing"&gt;Recurrent processing&lt;/h2&gt;
&lt;figure id="figure-amidi--amidihttpsstanfordedushervineteachingcs-230cheatsheet-convolutional-neural-networks"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://stanford.edu/~shervine/teaching/cs-230/illustrations/architecture-rnn-ltr.png" alt="[[Amidi &amp; Amidi](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks)]" loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
[&lt;a href="https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks" target="_blank" rel="noopener"&gt;Amidi &amp;amp; Amidi&lt;/a&gt;]
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="dynamique-de-la-vision"&gt;Dynamique de la vision&lt;/h2&gt;
&lt;figure id="figure-thorpe-2001httpslaurentperrinetgithubio2022-01-12_neurocercle21"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/figures/scheme_thorpe.jpg" alt="[[Thorpe (2001)]](https://laurentperrinet.github.io/2022-01-12_NeuroCercle/#/2/1)" loading="lazy" data-zoomable width="70%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&lt;a href="https://laurentperrinet.github.io/2022-01-12_NeuroCercle/#/2/1" target="_blank" rel="noopener"&gt;[Thorpe (2001)]&lt;/a&gt;
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="applications-robotiques"&gt;Applications robotiques&lt;/h2&gt;
&lt;figure id="figure-our-system-is-divided-into-3-units-to-process-visual-inputs-communicating-by-event-driven-feed-forward-and-feed-back-communications"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/principe_agile.jpg" alt="Our system is divided into 3 units to process visual inputs communicating by event-driven, feed-forward and feed-back communications." loading="lazy" data-zoomable width="90%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Our system is divided into 3 units to process visual inputs communicating by event-driven, feed-forward and feed-back communications.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h1 id="questions"&gt;Questions?&lt;/h1&gt;
&lt;p&gt;Ask info @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;More info @ &lt;a href="https://laurentperrinet.github.io/grant/anr-anr" target="_blank" rel="noopener"&gt;web-site&lt;/a&gt;&lt;/p&gt;</description></item><item><title>2020-12-10_agileneurobot_anr</title><link>https://laurentperrinet.github.io/slides/2020-12-10_agileneurobot_anr/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2020-12-10_agileneurobot_anr/</guid><description>&lt;a href="https://laurentperrinet.github.io/grant/anr-anr"&gt;
&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/header.png" alt="header" height="450"&gt;
&lt;/a&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;th&gt;&lt;a href="https://laurentperrinet.github.io/slides/2020-12-10_agileneurobot_anr"&gt;
Project presentation - L. Perrinet
&lt;!-- &lt;img src="http://www.cnrs.fr/themes/custom/cnrs/logo.svg" alt="CNRS" height="15"&gt; --&gt;
&lt;!-- &lt;img src="https://upload.wikimedia.org/wikipedia/en/thumb/2/2c/CNRS.svg/240px-CNRS.svg.png" alt="CNRS" height="40"&gt; --&gt;
&lt;br&gt;
&lt;u&gt;[2020-12-10] Kick-off meeting&lt;/u&gt;
&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/featured.png" alt="ANR" height="80"&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;h2 id="agileneurobot-fiche-didentité"&gt;AgileNeuRobot: Fiche d&amp;rsquo;identité&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Titre : Robots aériens agiles bio-mimetiques pour le vol en conditions réelles&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Title : Bio-mimetic agile aerial robots flying in real-life conditions&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;CES : CE23 - Intelligence Artificielle (ANR-20-CE23-0021)&lt;/li&gt;
&lt;li&gt;Durée: 3 ans, à partir du 1er mars 2021&lt;/li&gt;
&lt;li&gt;Budget total: 435 k€&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks"&gt;Spiking Neural Networks&lt;/h2&gt;
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption data-pre="Figure&amp;nbsp;" data-post=":&amp;nbsp;" class="numbered"&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="recurrent-processing"&gt;Recurrent processing&lt;/h2&gt;
&lt;figure id="figure-our-system-is-divided-into-3-units-to-process-visual-inputs-communicating-by-event-driven-feed-forward-and-feed-back-communications"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/principe_agile.jpg" alt="Our system is divided into 3 units to process visual inputs communicating by event-driven, feed-forward and feed-back communications." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption data-pre="Figure&amp;nbsp;" data-post=":&amp;nbsp;" class="numbered"&gt;
Our system is divided into 3 units to process visual inputs communicating by event-driven, feed-forward and feed-back communications.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="consortium"&gt;Consortium:&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img src="https://laurentperrinet.github.io/author/stéphane-viollet/avatar.jpg" alt="SV" height="150"&gt;&lt;/th&gt;
&lt;th&gt;&lt;img src="https://laurentperrinet.github.io/author/ryad-benosman/avatar.jpg" alt="RB" height="150"&gt;&lt;/th&gt;
&lt;th&gt;&lt;img src="https://laurentperrinet.github.io/author/laurent-u-perrinet/avatar.png" alt="LP" height="150"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Stéphane Viollet&lt;/td&gt;
&lt;td&gt;Ryad Benosman&lt;/td&gt;
&lt;td&gt;Laurent Perrinet&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Julien Diperi&lt;/td&gt;
&lt;td&gt;Sio-Hoï Ieng&lt;/td&gt;
&lt;td&gt;Emmanuel Daucé&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Inst Sciences Mouvement&lt;/td&gt;
&lt;td&gt;Inst de la Vision&lt;/td&gt;
&lt;td&gt;Inst Neurosci de la Timone&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;h2 id="gantt-chart-of-project"&gt;Gantt Chart of project&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://laurentperrinet.github.io/grant/anr-anr/gantt.png" alt="" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h1 id="questions"&gt;Questions?&lt;/h1&gt;
&lt;p&gt;Ask info @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;More info @ &lt;a href="https://laurentperrinet.github.io/grant/anr-anr" target="_blank" rel="noopener"&gt;web-site&lt;/a&gt;&lt;/p&gt;</description></item><item><title>2022-07-01_grimaldi-22-areadne</title><link>https://laurentperrinet.github.io/slides/2022-07-01_grimaldi-22-areadne/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2022-07-01_grimaldi-22-areadne/</guid><description>&lt;img src="https://laurentperrinet.github.io/publication/grimaldi-22-areadne/brain-logo-240.jpg" alt="header" height="350"&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;th&gt;&lt;a href="https://laurentperrinet.github.io/slides/2022-07-01_grimaldi-22-areadne"&gt;
Decoding spiking motifs using neurons with heterogeneous delays
&lt;!-- &lt;img src="http://www.cnrs.fr/themes/custom/cnrs/logo.svg" alt="CNRS" height="15"&gt; --&gt;
&lt;!-- &lt;img src="https://upload.wikimedia.org/wikipedia/en/thumb/2/2c/CNRS.svg/240px-CNRS.svg.png" alt="CNRS" height="40"&gt; --&gt;
&lt;br&gt;
&lt;u&gt;[2022-07-01] AREADNE 2022 conference&lt;/u&gt;
&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;h2 id="spiking-neural-networks"&gt;Spiking Neural Networks&lt;/h2&gt;
&lt;figure id="figure-from-frame-based-to-event-based-cameras"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="../../grant/anr-anr/event_driven_computations.png" alt="From frame-based to event-based cameras." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
From frame-based to event-based cameras.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-a-raster-plot"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="../../publication/grimaldi-22-areadne/figure_1a_k.png" alt="A raster plot.." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
A raster plot..
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure--as-a-mixture-of-motifs"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="../../publication/grimaldi-22-areadne/figure_1a.png" alt=".. as a mixture of motifs" loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
.. as a mixture of motifs
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure--defined-as-list-of-weights-and-delays"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="../../publication/grimaldi-22-areadne/figure_1b.png" alt="... defined as list of weights and delays.." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
&amp;hellip; defined as a list of weights and delays..
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-occurring-from-a-new-raster-plot"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="../../publication/grimaldi-22-areadne/figure_1c.png" alt="occurring from a new raster plot.." loading="lazy" data-zoomable width="95%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
occurring from a new raster plot..
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-22-areadne/LIF.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-22-areadne/HSD_conductance_speeds.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;h2 id="supervised-learning"&gt;supervised learning&lt;/h2&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-22-areadne/2022-06-23_Supervised_MC_input_1.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-22-areadne/2022-06-23_Supervised_MC_input_3.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/publication/grimaldi-22-areadne/2022-06-23_Supervised_MC_input_4.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;video autoplay loop &gt;
&lt;source src="https://laurentperrinet.github.io/talk/2022-06-19-neuro-vision-heterogeneous/2022-05-24_Supervised_MC_MC.mp4" type="video/mp4"&gt;
&lt;/video&gt;
&lt;hr&gt;
&lt;h2 id="learned-heterogeneous-weights"&gt;Learned heterogeneous weights&lt;/h2&gt;
&lt;hr&gt;
&lt;figure id="figure-heterogeneous-delays-as-convolution-kernels"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="../../publication/grimaldi-22-areadne/2022-06-26_Supervised_nat-causal_kernel.png" alt="Heterogeneous delays as convolution kernels." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Heterogeneous delays as convolution kernels.
&lt;/figcaption&gt;&lt;/figure&gt;
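&lt;p&gt;A minimal NumPy sketch (added here as an illustration, not the implementation used in the paper) of the idea in this figure: a bank of weights indexed by (presynaptic neuron, delay) acts as a temporal convolution kernel applied to the spike raster, so the drive of a detector neuron peaks when its preferred spiking motif occurs. All names and sizes are toy values.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

rng = np.random.default_rng(0)
N, T, D = 16, 200, 31                                      # presynaptic neurons, time bins, delays
raster = rng.binomial(1, 0.05, size=(N, T)).astype(float)  # toy spike raster
kernel = rng.normal(size=(N, D))                           # one weight per (neuron, delay) pair

# membrane drive of one detector neuron: at each time t, sum over neurons and delays
# of kernel[n, d] * raster[n, t - d], i.e. a temporal convolution of the raster
drive = np.zeros(T)
for n in range(N):
    # np.convolve flips the kernel, which matches summing past spikes at each delay d
    drive += np.convolve(raster[n], kernel[n], mode='full')[:T]
print(drive.shape)  # (200,); spikes could then be emitted when the drive crosses a threshold
&lt;/code&gt;&lt;/pre&gt;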
&lt;hr&gt;
&lt;figure id="figure-mask-applied-on-the-weights"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="../../publication/grimaldi-22-areadne/2022-06-26_Supervised_nat-causal_kernel-mask.png" alt="Mask applied on the weights." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Mask applied on the weights.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;figure id="figure-scatter-of-on-versus-off-weights"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="../../publication/grimaldi-22-areadne/2022-07-08_Supervised_nat_joint_ON-OFF.png" alt="Scatter of ON versus OFF weights." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Scatter of ON versus OFF weights.
&lt;/figcaption&gt;&lt;/figure&gt;
&lt;hr&gt;
&lt;h2 id="frugal-computing"&gt;Frugal computing&lt;/h2&gt;
&lt;figure id="figure-stable-accuracy-while-pruning-99-weights"&gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="../../publication/grimaldi-22-areadne/accuracy.png" alt="Stable accuracy while pruning ~99% weights." loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;figcaption&gt;
Stable accuracy while pruning ~99% of the weights.
&lt;/figcaption&gt;&lt;/figure&gt;
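&lt;p&gt;The exact pruning criterion is not given on this slide; as a hedged illustration of the kind of operation such a sparsity level implies, here is a generic magnitude-pruning sketch in NumPy (toy shapes, not the paper's code):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def prune_by_magnitude(w, keep_fraction=0.01):
    # keep only the top `keep_fraction` of weights by magnitude, zero out the rest
    k = max(1, int(keep_fraction * w.size))
    idx = np.argsort(np.abs(w).ravel())[-k:]    # indices of the k largest-magnitude weights
    mask = np.zeros(w.size)
    mask[idx] = 1.0
    return w * mask.reshape(w.shape)

w = np.random.randn(144, 31)                    # toy kernel: 144 inputs x 31 delays
w_pruned = prune_by_magnitude(w, keep_fraction=0.01)
sparsity = 1.0 - np.count_nonzero(w_pruned) / w.size
print(f"kept {np.count_nonzero(w_pruned)} weights, sparsity {sparsity:.2%}")
&lt;/code&gt;&lt;/pre&gt;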
&lt;hr&gt;
&lt;h1 id="questions"&gt;Questions?&lt;/h1&gt;
&lt;p&gt;Ask info @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;More info @ &lt;a href="https://laurentperrinet.github.io/publication/grimaldi-22-areadne/" target="_blank" rel="noopener"&gt;web-site&lt;/a&gt;&lt;/p&gt;</description></item></channel></rss>