Reproducing Olshausen's classical SparseNet (part 3)
In this notebook, we test the convergence of SparseNet as a function of different learning parameters. This shows the relative robustness of the method with respect to the coding parameters, but also the importance of homeostasis in obtaining an efficient set of filters:
- first, whatever the learning rate, convergence is not complete without homeostasis,
- second, we achieve better convergence for similar learning rates and over a certain range of learning rates for the homeostasis,
- third, the smoothing parameter alpha_homeo has to be set properly to achieve good convergence,
- last, this homeostatic rule works with the different variants of sparse coding.
See also:
- https://laurentperrinet.github.io/sciblog/posts/2017-03-14-reproducing-olshausens-classical-sparsenet.html for a description of how SparseNet is implemented in the scikit-learn package
- https://laurentperrinet.github.io/sciblog/posts/2017-03-15-reproducing-olshausens-classical-sparsenet-part-2.html for a description of how we managed to implement the homeostasis
- In an extension, we will study how homeostasis (cooperation) may be an essential ingredient for this algorithm, which otherwise operates on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see https://laurentperrinet.github.io/publication/perrinet-10-shl ).
This is joint work with Victor Boutin.
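To fix ideas, the homeostasis studied here modulates the gain (the norm) of each dictionary atom so that all atoms end up being selected with similar probability. Here is a minimal sketch of such a SparseNet-style rule (an illustration, not the shl_scripts implementation), where eta_homeo sets the smoothing rate of the running variance estimate and alpha_homeo sets the strength of the gain modulation:
import numpy as np

def homeostasis_step(code, variance, gain, eta_homeo=.01, alpha_homeo=.02):
    # illustrative sketch of a SparseNet-style homeostatic update
    # code: (n_samples, n_atoms) sparse coefficients for the current batch
    # variance, gain: (n_atoms,) running variance estimate and per-atom gain
    variance = (1 - eta_homeo) * variance + eta_homeo * (code**2).mean(axis=0)
    # atoms used more than average get their gain lowered, and conversely;
    # alpha_homeo controls how sharply the gain reacts to the imbalance
    gain *= (variance.mean() / variance) ** alpha_homeo
    return variance, gain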
In [1]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
np.set_printoptions(precision=2, suppress=True)
In [2]:
from shl_scripts import SHL
list_figures = ['show_dico', 'plot_variance', 'plot_variance_histogram', 'time_plot_prob', 'time_plot_kurt', 'time_plot_var']
DEBUG_DOWNSCALE, verbose = 10, 100  # quick debugging run: downscaled problem, maximal verbosity
DEBUG_DOWNSCALE, verbose = 10, 10   # quick debugging run: downscaled problem, moderate verbosity
DEBUG_DOWNSCALE, verbose = 1, 0     # full run, silent -- this last assignment is the one used
N_scan = 7
database = 'database/'
shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, verbose=verbose)
data = shl.get_data()
1. With different learning rates but without homeostasis¶
Here, we only ensure that the norm of the filters is kept constant.
In [3]:
shl = SHL()
data = shl.get_data()
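A scan of the learning rate with homeostasis switched off can be sketched along the same lines as the homeostatic scan of section 2 below; this sketch assumes that passing eta_homeo=0. to SHL disables the homeostatic update:
for eta in np.logspace(-1, 1, N_scan, base=10)*shl.eta:
    matname = 'no homeo - eta={}'.format(eta)
    # eta_homeo=0. is assumed here to switch homeostasis off
    shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, eta=eta, eta_homeo=0., verbose=verbose)
    dico = shl.learn_dico(data=data, matname=matname, list_figures=list_figures)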
2. Homeostasis à-la-SparseNet¶
In [4]:
shl = SHL()
data = shl.get_data()
for eta in np.logspace(-1, 1, N_scan, base=10)*shl.eta:
    matname = 'homeo - eta={}'.format(eta)
    shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, eta=eta, verbose=verbose)
    dico = shl.learn_dico(data=data, matname=matname, list_figures=list_figures)
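Each of these scans multiplies the default value of the parameter by seven factors spaced logarithmically over two decades, from one tenth to ten times the default:
import numpy as np
print(np.logspace(-1, 1, 7, base=10))
# -> [ 0.1   0.22  0.46  1.    2.15  4.64 10.  ]  (with precision=2 as set above)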
In [5]:
dico.record_each
Out[5]: (output not shown)
In [6]:
shl = SHL()
data = shl.get_data()
for eta_homeo in np.logspace(-1, 1, N_scan, base=10)*shl.eta_homeo:
    matname = 'homeo - eta_homeo={}'.format(eta_homeo)
    shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, eta_homeo=eta_homeo, verbose=verbose)
    dico = shl.learn_dico(data=data, matname=matname, list_figures=list_figures)
3. With different smoothing parameters for the homeostatic gain¶
In [7]:
shl = SHL()
data = shl.get_data()
for alpha_homeo in np.logspace(-1, .5, N_scan, base=10)*shl.alpha_homeo:
    matname = 'homeo - alpha_homeo={}'.format(alpha_homeo)
    print(alpha_homeo)
    shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, alpha_homeo=alpha_homeo, verbose=verbose)
    dico = shl.learn_dico(data=data, matname=matname, list_figures=list_figures)
4. With different sparseness parameters¶
In [8]:
N_scan = int(15/(DEBUG_DOWNSCALE)**.3)
for l0_sparseness in 2**np.arange(8):
    matname = 'l0_sparseness={}'.format(l0_sparseness)
    shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, l0_sparseness=l0_sparseness, verbose=verbose)
    dico = shl.learn_dico(data=data, matname=matname, list_figures=list_figures)
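This scan covers sparseness levels from 1 to 128 atoms per patch, doubling at each step:
import numpy as np
print(2**np.arange(8))
# -> [  1   2   4   8  16  32  64 128]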
5. With different sparse coding algorithms¶
In [9]:
learning_algorithms = [
    ('Matching Pursuit, 10 atoms', 'MP_N10',
     {'learning_algorithm': 'mp', 'l0_sparseness': 10}),
    ('Matching Pursuit, 20 atoms', 'MP_N20',
     {'learning_algorithm': 'mp', 'l0_sparseness': 20}),
    ('Orthogonal Matching Pursuit, 10 atoms', 'OMP_N10',
     {'learning_algorithm': 'omp', 'l0_sparseness': 10}),
    ('Orthogonal Matching Pursuit, 20 atoms', 'OMP_N20',
     {'learning_algorithm': 'omp', 'l0_sparseness': 20}),
    # the three l1-based variants get distinct titles and labels, since the
    # label is used as matname to cache the results of each run
    ('Least-angle regression, 5 atoms', 'LARS_N5',
     {'learning_algorithm': 'lars', 'l0_sparseness': 5}),
    ('Lasso least-angle regression, 5 atoms', 'LASSO_LARS_N5',
     {'learning_algorithm': 'lasso_lars', 'l0_sparseness': 5}),
    ('Lasso coordinate descent, 5 atoms', 'LASSO_CD_N5',
     {'learning_algorithm': 'lasso_cd', 'l0_sparseness': 5})]
for learning_title, learning_label, learning_kwargs in learning_algorithms:
    print('Dictionary learned from image patches using ' + learning_title)
    shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, database=database,
              verbose=verbose, **learning_kwargs)
    dico = shl.learn_dico(data=data, matname=learning_label, list_figures=list_figures)
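For reference, here is a minimal sketch of plain matching pursuit (an illustration, not the shl_scripts implementation): at each of l0_sparseness steps, the atom most correlated with the residual is selected and its contribution subtracted. Orthogonal matching pursuit additionally re-fits the coefficients of all selected atoms by least squares at each step, while the LARS and lasso variants solve an l1-penalized regression instead.
import numpy as np

def matching_pursuit(x, D, l0_sparseness=10):
    # x: (n_features,) signal; D: (n_atoms, n_features) dictionary with unit-norm rows
    residual = x.copy()
    code = np.zeros(D.shape[0])
    for _ in range(l0_sparseness):
        corr = D @ residual                # correlation of each atom with the residual
        i = np.argmax(np.abs(corr))        # pick the best-matching atom
        code[i] += corr[i]                 # accumulate its coefficient
        residual -= corr[i] * D[i]         # subtract its contribution from the residual
    return code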
Version used¶
In [ ]:
%load_ext version_information
%version_information numpy, shl_scripts