An efficiency razor for model selection and adaptation in the primary visual cortex

Abstract

We describe the theoretical formulation of a learning algorithm in a model of the primary visual cortex (V1) and evaluate its efficiency by comparing it to the SparseNet algorithm (Olshausen, 1996). Like SparseNet, it is based on a model of signal synthesis as a Linear Generative Model, but it differs in the efficiency criterion used for the representation. This learning algorithm rests on a criterion in the spirit of Occam's razor: for a similar reconstruction quality, the shortest representation should be privileged. The resulting inverse problem is NP-complete, and we propose a greedy solution grounded in the architecture and nature of neural computations (Perrinet, 2006). We present results of a simulation of this network on small natural images (code available at https://github.com/bicv/SparseHebbianLearning) and compare them to the SparseNet solution. We show that this solution based on neural computations yields an adaptive algorithm for efficient representations in V1.
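The greedy strategy alluded to above can be illustrated by a standard Matching Pursuit loop over a linear generative model: at each step, pick the dictionary atom most correlated with the residual, record its coefficient, and subtract its contribution. The sketch below is illustrative only; function names, parameters, and the random dictionary are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy sparse coding (Matching Pursuit sketch).

    dictionary: array of shape (n_pixels, n_atoms_total), columns unit-norm.
    Returns the sparse coefficient vector and the final residual.
    """
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        # correlation of every atom with the current residual
        correlations = dictionary.T @ residual
        best = int(np.argmax(np.abs(correlations)))
        # greedily assign the best-matching atom's coefficient
        coeffs[best] += correlations[best]
        residual -= correlations[best] * dictionary[:, best]
    return coeffs, residual

# Usage with a random unit-norm dictionary (hypothetical data, not the
# natural-image patches used in the paper):
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)
x = rng.normal(size=64)
code, res = matching_pursuit(x, D, n_atoms=20)
```

Each iteration can only decrease the residual energy, so the loop trades representation length (number of active atoms) against reconstruction quality, which is the efficiency trade-off the abstract describes.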

Publication
Fifteenth Annual Computational Neuroscience Meeting: CNS*2006
Laurent U Perrinet