<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Spikes | Next-generation neural computations</title><link>https://laurentperrinet.github.io/tag/spikes/</link><atom:link href="https://laurentperrinet.github.io/tag/spikes/index.xml" rel="self" type="application/rss+xml"/><description>Spikes</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><copyright>This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder. This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License Please note that multiple distribution, publication or commercial usage of copyrighted papers included in this website would require submission of a permission request addressed to the journal in which the paper appeared.</copyright><lastBuildDate>Wed, 15 Apr 2026 09:00:00 +0000</lastBuildDate><image><url>https://laurentperrinet.github.io/media/icon_hu_577209b6f89bd18.png</url><title>Spikes</title><link>https://laurentperrinet.github.io/tag/spikes/</link></image><item><title>Working Memory in SNNs</title><link>https://laurentperrinet.github.io/slides/2026-04-15-airov/</link><pubDate>Wed, 15 Apr 2026 09:00:00 +0000</pubDate><guid>https://laurentperrinet.github.io/slides/2026-04-15-airov/</guid><description>&lt;section&gt;
&lt;!-- no-branding --&gt;
&lt;h1 id="learning-working-memory-in-recurrent-spiking-neural-networks-using-heterogeneous-delays"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-04-15-airov/?transition=fade" target="_blank" rel="noopener"&gt;Learning Working Memory in Recurrent Spiking Neural Networks Using Heterogeneous Delays&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-04-15-airov/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="austrian-symposium-on-ai-robotics-and-vision"&gt;&lt;u&gt;&lt;a href="https://airov.at/2026/index.html" target="_blank" rel="noopener"&gt;Austrian Symposium on AI, Robotics and Vision&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2026-04-15"&gt;[2026-04-15]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;&lt;em&gt;Hello&lt;/em&gt;, I&amp;rsquo;m Laurent Perrinet from the Institut des Neurosciences de la Timone, a joint AMU / CNRS unit, and I am delighted to give this talk at the AIROV workshop on Recent Advances in SNNs.&lt;/p&gt;
&lt;p&gt;Today, I will be speaking about working memory, that is, storing patterns over durations on the order of seconds, in spiking neural networks. This is a hard problem, as spiking neurons have a memory limited to the order of tens of milliseconds. How can one extend this memory to longer durations? Here, I will present a method for building &lt;em&gt;WM in Spiking Neural Networks by using Heterogeneous Delays&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;d like to &lt;em&gt;thank&lt;/em&gt; Sander Bohté and Sebastian Otte for organizing this workshop, and you for listening.
These slides are available from my website, along with a number of references. The &lt;em&gt;outline&lt;/em&gt; of the talk is as follows: first, I&amp;rsquo;ll describe how one may perform computations using heterogeneous delays and present a toy model example; then, I&amp;rsquo;ll show a full-scale example quantifying the performance on synthetic data.&lt;/p&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="polychronization"&gt;Polychronization&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich_left.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
The core idea of the method follows the use of polychronous groups as defined by Izhikevich in 2006. Suppose three presynaptic neurons are connected to two postsynaptic neurons by certain weights and certain delays, where a delay corresponds to the time it takes for a spike to travel from one neuron to the next.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="polychronization-1"&gt;Polychronization&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich_middle.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
If these delays are different and the presynaptic neurons are activated synchronously, then the postsynaptic currents do not coincide in time, such that the membrane potential does not reach the firing threshold.
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="polychronization-2"&gt;Polychronization&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/2023-07-20_HDSNN-ICANN/raw/master/figures/izhikevich.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
However, if the timing of presynaptic spikes forms a &lt;em&gt;spiking motif&lt;/em&gt; such that they reach the soma of neuron b_1 at the same time, then this neuron will be selectively activated.
&lt;/aside&gt;
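As a toy illustration of this coincidence mechanism (a minimal sketch with hypothetical spike times and delays, not values from the talk), the following checks whether presynaptic spike times plus their axonal delays align at the postsynaptic soma:

```python
# Toy coincidence detection with heterogeneous delays (illustrative values).
# Presynaptic neuron i fires at time t_i; its spike arrives at the
# postsynaptic soma at t_i + d_i. The postsynaptic neuron fires only if
# all arrivals coincide within a small temporal tolerance.

def postsynaptic_fires(spike_times, delays, tolerance=0.1):
    """Return True if all delayed spikes arrive within `tolerance` ms."""
    arrivals = [t + d for t, d in zip(spike_times, delays)]
    return max(arrivals) - min(arrivals) <= tolerance

delays = [3.0, 2.0, 1.0]  # heterogeneous axonal delays (ms)

# Synchronous presynaptic firing: arrivals are spread out -> no spike.
print(postsynaptic_fires([0.0, 0.0, 0.0], delays))  # False

# Spiking motif mirroring the delays: arrivals coincide -> spike.
print(postsynaptic_fires([0.0, 1.0, 2.0], delays))  # True
```

Only the motif whose timing mirrors the delay profile drives the postsynaptic neuron, which is the selectivity exploited throughout the talk.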
&lt;hr&gt;
&lt;h2 id="polychronization-3"&gt;Polychronization&lt;/h2&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/izhikevich_rec.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;aside class="notes"&gt;
&lt;p&gt;Following on this idea, and similar to the original network from Izhikevich, one may build such a process in a recurrent network. Synapses are defined similarly, but act on the same population rather than a separate one.&lt;/p&gt;
&lt;p&gt;Given this architecture, and deviating now from Izhikevich, we may wish to define motifs such that, given one context window (green shaded area), the network predicts the occurrence of the spikes at the next time step. This creates a new context and a new prediction, such that we may iteratively build a self-sustained spiking sequence.&lt;/p&gt;
&lt;/aside&gt;
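The rollout described in the notes (context window of past spikes drives the next prediction, which extends the context) can be sketched as follows; the threshold, tensor names, and toy weights are illustrative assumptions, not the talk's implementation:

```python
import numpy as np

def rollout(s_init, W, T, beta=0.9, theta=1.0):
    """Autoregressively generate spikes in a recurrent delayed network.

    s_init: (D, N) seed spike frames; W: (N, N, D) delayed weights;
    returns (T, N) generated spikes. Each step uses the last D frames as
    context, updates the membrane potentials, and thresholds at theta."""
    D, N = s_init.shape
    frames = list(s_init)
    u = np.zeros(N)
    out = []
    for _ in range(T):
        ctx = np.stack(frames[-D:][::-1])  # ctx[d-1] = s(t-d), most recent first
        # Leak (reset where the neuron just spiked) plus delayed synaptic drive.
        u = beta * u * (1.0 - ctx[0]) + np.einsum('jid,di->j', W, ctx)
        s = (u >= theta).astype(float)
        frames.append(s)
        out.append(s)
    return np.array(out)

# Toy weights: two neurons that trigger each other with a one-step delay,
# producing a self-sustained alternating sequence.
W = np.zeros((2, 2, 1))
W[1, 0, 0] = 1.0
W[0, 1, 0] = 1.0
print(rollout(np.array([[1.0, 0.0]]), W, 4))
```

Seeding with neuron 0 active yields an alternating pattern ([0,1], [1,0], ...), a minimal instance of a sequence that sustains itself through its own predictions.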
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="methods--bptt-snn-torch---frozen-target"&gt;Methods : BPTT (snn Torch) - frozen target&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/unrolled.svg" alt="" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;p&gt;&lt;/span&gt; &lt;span class="fragment " &gt;&lt;/p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/pattern.svg" alt="" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="methods--weight-initialization"&gt;Methods : Weight initialization&lt;/h2&gt;
&lt;span class="fragment " &gt;
$$ u_j(t)
= \beta \cdot u_j(t-1) \cdot (1 - s_j(t-1))
+ \sum_{i=1}^{N} \sum_{d=1}^{D} \mathbf{W}_{j, i, d} \cdot s_i(t-d),
$$
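One step of this update rule can be sketched in NumPy (a minimal sketch; the array names and layout are illustrative assumptions, not the talk's code):

```python
import numpy as np

def membrane_update(u, s_hist, W, beta=0.9):
    """One step of u_j(t) = beta*u_j(t-1)*(1 - s_j(t-1))
                            + sum_{i,d} W[j,i,d] * s_i(t-d).

    u:      (N,)   membrane potentials at t-1
    s_hist: (D, N) past spikes, s_hist[d-1] holds s(t-d)
    W:      (N, N, D) delayed synaptic weights
    """
    leak = beta * u * (1.0 - s_hist[0])            # decay; reset where s(t-1)=1
    drive = np.einsum('jid,di->j', W, s_hist)      # sum over sources i and delays d
    return leak + drive
```

The `(1 - s_j(t-1))` factor resets the potential of neurons that just spiked, while the double sum gathers input across both sources and delays, which is what makes the delay dimension D a learnable memory resource.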
&lt;/span&gt; &lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/target.svg" alt="" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;h2 id="results--recall-of-target"&gt;Results : recall of target&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/target.svg" alt="" loading="lazy" data-zoomable width="100%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;p&gt;&lt;/span&gt; &lt;span class="fragment " &gt;&lt;/p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/target.svg" alt="" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--role-of-parameters"&gt;Results : role of parameters&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/MNESIS_N_SM.svg" alt="" loading="lazy" data-zoomable width="33%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/MNESIS_N_time.svg" alt="" loading="lazy" data-zoomable width="33%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/MNESIS_num_delay.svg" alt="" loading="lazy" data-zoomable width="33%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;hr&gt;
&lt;h2 id="results--recall-of-target-1"&gt;Results : recall of target&lt;/h2&gt;
&lt;span class="fragment " &gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/unrolled.svg" alt="" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;p&gt;&lt;/span&gt; &lt;span class="fragment " &gt;&lt;/p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/MNESIS/raw/main/figures/target.svg" alt="" loading="lazy" data-zoomable width="50%" /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/span&gt;
&lt;aside class="notes"&gt;
&lt;/aside&gt;
&lt;/section&gt;
&lt;hr&gt;
&lt;section&gt;
&lt;!-- no-branding --&gt;
&lt;h1 id="learning-working-memory-in-recurrent-spiking-neural-networks-using-heterogeneous-delays-1"&gt;&lt;a href="https://laurentperrinet.github.io/slides/2026-04-15-airov/?transition=fade" target="_blank" rel="noopener"&gt;Learning Working Memory in Recurrent Spiking Neural Networks Using Heterogeneous Delays&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="laurent-perrinet-1"&gt;&lt;em&gt;&lt;a href="https://laurentperrinet.github.io/talk/2026-04-15-airov/" target="_blank" rel="noopener"&gt;Laurent Perrinet&lt;/a&gt;&lt;/em&gt;&lt;/h2&gt;
&lt;h3 id="austrian-symposium-on-ai-robotics-and-vision-1"&gt;&lt;u&gt;&lt;a href="https://airov.at/2026/index.html" target="_blank" rel="noopener"&gt;Austrian Symposium on AI, Robotics and Vision&lt;/a&gt;&lt;/u&gt;&lt;/h3&gt;
&lt;h3 id="2026-04-15-1"&gt;[2026-04-15]&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img src="https://github.com/laurentperrinet/perrinet_curriculum-vitae.tex/raw/master/logotypes/troislogos.jpg" alt="logo" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;Contact me @ &lt;a href="mailto:laurent.perrinet@univ-amu.fr"&gt;laurent.perrinet@univ-amu.fr&lt;/a&gt;&lt;/p&gt;
&lt;/section&gt;</description></item></channel></rss>