We describe the theoretical formulation of a learning algorithm in a model of the primary visual cortex (V1) and evaluate its efficiency by comparing it to the SparseNet algorithm [1]. Like SparseNet, it is based on a model of signal synthesis as a Linear Generative Model, but it differs in the efficiency criterion used for the representation. The learning algorithm rests on an efficiency criterion derived from Occam's razor: for a similar reconstruction quality, the shortest representation should be preferred. This inverse problem is NP-complete, and we propose here a greedy solution grounded in the architecture and nature of neural computations [2]. It proposes that supra-threshold neural activity progressively removes redundancies in the representation through correlation-based inhibition, and it provides a dynamical implementation close to Hebb's concept of neural assemblies [3]. We present results of simulations of this network on small natural images and compare them to the SparseNet solution. Extending the approach to realistic images and to the NEST simulator (http://www.nest-initiative.org/), we show that this learning algorithm, based on the properties of neural computations, produces adaptive and efficient representations in V1.

References
1. Olshausen B, Field DJ: Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Res 1997, 37:3311-3325.
2. Perrinet L: Feature detection using spikes: the greedy approach. J Physiol Paris 2004, 98(4-6):530-539.
3. Hebb DO: The Organization of Behavior. New York: Wiley; 1949.
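The greedy solution mentioned above can be illustrated by a matching-pursuit-style sketch: at each step the atom most correlated with the residual is selected and its contribution subtracted, which implements the correlation-based removal of redundancy in its simplest form. This is a minimal illustration under stated assumptions (unit-norm dictionary columns, a fixed number of iterations), not the network implementation described in the abstract; the function and parameter names are ours.

```python
import numpy as np

def greedy_sparse_code(signal, dictionary, n_atoms=10):
    """Matching-pursuit-style sketch of greedy sparse coding.

    Assumes `dictionary` stores unit-norm atoms as columns.
    Repeatedly picks the atom most correlated with the residual
    (a correlation-based selection) and removes its contribution,
    shortening the representation one atom at a time.
    """
    residual = np.asarray(signal, dtype=float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        # Correlation of every atom with the current residual.
        correlations = dictionary.T @ residual
        best = np.argmax(np.abs(correlations))
        # Accumulate the coefficient and subtract the atom's contribution.
        coeffs[best] += correlations[best]
        residual -= correlations[best] * dictionary[:, best]
    return coeffs, residual
```

By construction the signal always equals `dictionary @ coeffs + residual`, and each iteration can only shrink the residual norm, so reconstruction quality improves monotonically as atoms are added.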