Which sparsity problem does the brain solve?


Experimental evidence suggests that activity in sensory cortices is sparse: only a few neurons out of a large pool that could respond to a sensed stimulus are active at any time. Generative learning models that aim to replicate sensory systems could deviate from sparse activity patterns when representing noisy signals. We ask: are there biologically plausible implementations that maintain sparse activations across different noise levels while still representing the underlying signal? A family of generative algorithms modelling sensory systems represents a stimulus as a linear sum over an overcomplete dictionary of vectors, with the corresponding coefficients taking the role of activations. Olshausen and Field [1] showed that a learning algorithm set to reconstruct natural images with sparse activations develops vectors with the properties found in the receptive fields of neurons in V1, i.e. they are localized, band-pass, and oriented.
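The generative model described above — a stimulus expressed as a linear sum over an overcomplete dictionary, with only a few coefficients active — can be sketched with a toy sparse coder. The following is an illustrative example only, not the authors' implementation: it assumes a random unit-norm dictionary and uses greedy matching pursuit as one simple way to obtain sparse coefficients; all names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: an overcomplete dictionary Phi of n_atoms unit-norm
# vectors in an n_dim-dimensional signal space (n_atoms > n_dim).
n_dim, n_atoms = 16, 64
Phi = rng.standard_normal((n_dim, n_atoms))
Phi /= np.linalg.norm(Phi, axis=0)

# A stimulus built from 3 atoms plus a little noise: in the sparse-coding
# picture, only a few "neurons" (coefficients) should become active.
true_idx = rng.choice(n_atoms, size=3, replace=False)
x = Phi[:, true_idx] @ np.array([1.5, -1.0, 0.8]) + 0.01 * rng.standard_normal(n_dim)

def matching_pursuit(x, Phi, n_iter=10, tol=1e-3):
    """Greedy sparse coding: pick the best-matching atom, explain it away, repeat."""
    a = np.zeros(Phi.shape[1])
    residual = x.copy()
    for _ in range(n_iter):
        corr = Phi.T @ residual          # correlation of residual with each atom
        k = np.argmax(np.abs(corr))      # most strongly driven candidate "neuron"
        a[k] += corr[k]                  # accumulate its coefficient (activation)
        residual -= corr[k] * Phi[:, k]  # subtract its contribution from the signal
        if np.linalg.norm(residual) < tol:
            break
    return a

a = matching_pursuit(x, Phi)
print("active coefficients:", np.count_nonzero(a), "of", n_atoms)
print("relative reconstruction error:",
      np.linalg.norm(x - Phi @ a) / np.linalg.norm(x))
```

The sketch illustrates the trade-off the abstract raises: the coder keeps the activation vector sparse while the residual term absorbs the noise in the stimulus.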

Proceedings of AREADNE
Laurent U Perrinet