A key property of neurons in the primary visual cortex (V1) is their selectivity to oriented stimuli in the visual field. Orientation selectivity allows the segmentation of objects in natural visual scenes, which is the first step in building integrative representations from retinal inputs. As such, V1 has long been of central interest in the design of artificial neural networks, and recent years have seen growing interest in explainable yet robust and adaptive models of cortical visual processes, for fundamental or applied purposes. One notable challenge for such models is to behave reliably in generic natural environments, where information is usually hidden in noise, whereas most models are typically studied with oriented gratings.

Here we show that a simple biologically inspired neural network accounts for orientation selectivity to natural-like textures in the cat's primary visual cortex. Our spiking neural network (SNN) is made of point neurons organized in recurrent and hierarchical layers based on the structure of cortical layers IV and II/III. We found that spike-timing-dependent plasticity and synaptic recurrence allowed the SNN to self-organize its connection weights and reproduce the activity of neurons recorded with laminar probes in cortical areas 17 and 18 of cats, notably orientation tuning responses. After less than 5 seconds of stimulus presentation, the SNN displays narrow orientation selectivity (bandwidth = 10 degrees) characteristic of sparse representations, removes noise from the input and learns the structure of natural pattern repetitions. Our results support the use of natural stimuli to study theoretical and experimental cortical dynamics. Furthermore, this model encourages using SNNs to reduce complexity in cortical networks as a method to understand the separate contribution of different components in the laminar organization of the cortex.
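As a minimal sketch of the learning rule the abstract credits with self-organizing the network's connection weights, a pair-based spike-timing-dependent plasticity (STDP) window can be written as follows. The parameter values (`a_plus`, `a_minus`, the time constants) are illustrative assumptions, not those of the actual model:

```python
# Pair-based STDP: the weight change depends on the timing difference
# between a presynaptic and a postsynaptic spike. All parameters below
# are illustrative defaults, not the values used in the paper's SNN.
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike pair separated by dt = t_post - t_pre (ms)."""
    if dt >= 0:
        # pre fires before post: causal pairing, potentiation
        return a_plus * np.exp(-dt / tau_plus)
    # post fires before pre: anti-causal pairing, depression
    return -a_minus * np.exp(dt / tau_minus)

# A causal pairing strengthens the synapse, an anti-causal one weakens it
print(stdp_dw(5.0) > 0)    # True
print(stdp_dw(-5.0) < 0)   # True
```

Summing such updates over all spike pairs, with weights kept in a bounded range, is the standard way this rule shapes feedforward and recurrent connectivity in simulated cortical layers.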
From an applied perspective, the computations this network performs could also be used as an alternative to the classical black-box Deep Learning models used in artificial vision.
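The narrow orientation selectivity reported above (bandwidth = 10 degrees) is typically quantified from a tuning curve. A common approach, sketched here with a synthetic von Mises tuning curve and purely illustrative parameter values, is to measure the half-width at half-height (HWHH) of the response around the preferred orientation:

```python
# Hedged sketch: quantifying orientation-tuning bandwidth as the
# half-width at half-height (HWHH) of a von Mises tuning curve.
# All numbers here are synthetic, for illustration only.
import numpy as np

def von_mises(theta_deg, r_max=30.0, kappa=8.0, theta_pref=90.0, baseline=2.0):
    """Orientation tuning curve with a 180-degree period."""
    # factor 2 inside the cosine: orientation (180 deg), not direction (360 deg)
    d = np.deg2rad(2.0 * (theta_deg - theta_pref))
    return baseline + r_max * np.exp(kappa * (np.cos(d) - 1.0))

thetas = np.arange(0.0, 180.0, 0.5)        # finely sampled orientations (deg)
rates = von_mises(thetas)                  # synthetic firing rates
pref = thetas[np.argmax(rates)]            # preferred orientation
half = 2.0 + 0.5 * (rates.max() - 2.0)     # baseline + half the peak response
above = thetas[rates >= half]              # orientations above half-height
hwhh = 0.5 * (above.max() - above.min())   # bandwidth estimate, in degrees
```

With real or simulated spike counts, the same measurement is made after fitting the von Mises parameters to the recorded rates; narrower tuning corresponds to a larger concentration parameter `kappa`.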
"Interested in orientation selectivity in V1? At #sfn2019? We tested a model with different precision levels and then tested these predictions in real neurons! Check out poster 403.16 / P20 @ https://t.co/iHUv0AHuzl -> more info: https://t.co/JkXXgC5IVp 🤝 @univamu @CNRS pic.twitter.com/MVBz0UGH70"
— laurentperrinet (@laurentperrinet), October 22, 2019