Reproducing Olshausen's classical SparseNet (part 1)
- This notebook tries to reproduce the learning strategy specified in the framework of the SparseNet algorithm from Bruno Olshausen. It allows one to efficiently code natural image patches by constraining the code to be sparse.
- The underlying machinery uses a dictionary learning similar to the one used in the image denoising example from sklearn (see the sketch below); our aim here is to show that a novel ingredient is necessary to reproduce Olshausen's results.
- All these code bits are regrouped in the SHL scripts repository (where you will also find some older Matlab code). You may install it using
pip install git+https://github.com/bicv/SHL_scripts
- Following this failed PR to sklearn, which was argued in this post (and the following ones), the goal of this notebook is to illustrate the simpler code implemented in the SHL scripts.
This is joint work with Victor Boutin.
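For reference, here is a minimal sketch of that sklearn machinery on a generic test image. The image (scipy's raccoon-face sample, assuming a recent scipy where it lives in scipy.datasets; older versions expose it as scipy.misc.face), the patch size and the number of atoms are arbitrary illustrative choices, not the settings used by the SHL scripts below.
import numpy as np
from scipy import datasets
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

# extract small patches from a test image (illustrative sizes only)
image = datasets.face(gray=True).astype(float) / 255.
patches = extract_patches_2d(image, patch_size=(12, 12), max_patches=5000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)  # center each patch

# plain dictionary learning, as in sklearn's image denoising example
dico = MiniBatchDictionaryLearning(n_components=100, alpha=1., batch_size=256, random_state=0)
dico.fit(X)
print(dico.components_.shape)  # one atom per row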
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
np.set_printoptions(precision=2, suppress=True)
Let's start by running a simple learning phase, as implemented in the image denoising example from sklearn, and then display the resulting dictionary:
from shl_scripts.shl_experiments import SHL
DEBUG_DOWNSCALE, verbose = 1, 0
matname = 'olshausen'
# eta_homeo=0. switches off the homeostatic gain control, so learning behaves like plain dictionary learning
shl = SHL(datapath='/tmp/database/', DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, verbose=verbose, eta_homeo=0.)
fig, ax = shl.learn_dico(matname=matname).show_dico(title=matname)
fig.show()
in summary
In this notebook, we have replicated the classical SparseNet algorithm of Olshausen on a set of natural images. However, the learned dictionaries are qualitatively not the same as those in the original paper, and this is certainly due to the lack of control of the competition during the learning phase.
What differs in this implementation from the original algorithm is mainly the way the norm of the filters is controlled. Here, sklearn
simply assumes that $\| V_k \|_2 = 1$, $\forall k$ (with $0 \leq k < n_\text{components}$). We will see that this may be a problem.
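To make this point explicit, a quick check on the dico object fitted in the sketch above shows that sklearn keeps every atom at (or very close to) unit L2 norm:
# each row of components_ is one atom V_k
norms = np.linalg.norm(dico.components_, axis=1)
print(norms.min(), norms.max())  # expected to be close to 1 for every atom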
Let's now address this in a new notebook.
In an extension, we will study how homeostasis (cooperation) may be an essential ingredient for this algorithm, which otherwise operates on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see https://laurentperrinet.github.io/publication/perrinet-10-shl ).