Reproducing Olshausen's classical SparseNet (part 3)
This is an old blog post; see the newer version in this post.
In this notebook, we test the convergence of SparseNet as a function of different learning parameters. This shows the relative robustness of this method with respect to the coding parameters, but also the importance of homeostasis for obtaining an efficient set of filters:
- first, whatever the learning rate, the convergence is not complete without homeostasis,
- second, we achieve better convergence for similar learning rates and over a certain range of homeostatic learning rates,
- third, the smoothing parameter alpha_homeo has to be properly set to achieve a good convergence,
- last, this homeostatic rule works with the different variants of sparse coding.
See also:
- https://laurentperrinet.github.io/sciblog/posts/2015-05-05-reproducing-olshausens-classical-sparsenet.html for a description of how SparseNet is implemented in the scikit-learn package,
- https://laurentperrinet.github.io/sciblog/posts/2015-05-06-reproducing-olshausens-classical-sparsenet-part-2.html for a description of how we implemented the homeostasis,
- this PR to sklearn.
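To make the role of these parameters concrete, here is a minimal, schematic sketch of a SparseNet-style homeostatic gain update. This is not the shl_scripts implementation: the function name, the target-variance formulation, and the mapping of eta_homeo onto the averaging rate and of alpha_homeo onto the gain exponent are assumptions made for illustration only.

import numpy as np

def homeostatic_gain_update(gain, var_est, coeffs, var_goal=0.1,
                            eta_homeo=0.01, alpha_homeo=0.02):
    # Schematic SparseNet-style homeostasis (illustration only).
    # gain    : (n_atoms,) gain rescaling the norm of each dictionary atom
    # var_est : (n_atoms,) running estimate of each atom's coefficient variance
    # coeffs  : (n_atoms, n_samples) sparse coefficients for the last batch
    # low-pass estimate of each atom's coefficient variance (assumed role of eta_homeo)
    var_est = (1 - eta_homeo) * var_est + eta_homeo * np.mean(coeffs**2, axis=1)
    # drift every gain toward the common variance goal: an over-used atom gets a
    # larger norm, hence smaller coefficients (assumed role of alpha_homeo)
    gain = gain * (var_est / var_goal)**alpha_homeo
    return gain, var_est

The gains then rescale the unit-norm atoms of the dictionary before the next coding step, so that all atoms end up being selected with comparable probability.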
In [1]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
np.set_printoptions(precision=2, suppress=True)
In [2]:
from shl_scripts import SHL
DEBUG_DOWNSCALE, verbose = 10, 100  # quick, verbose debugging run
DEBUG_DOWNSCALE, verbose = 10, 0    # quick, quiet debugging run
DEBUG_DOWNSCALE, verbose = 1, 0     # full run (the last assignment wins)
N_scan = 7
database = '/Users/lolo/pool/science/BICV/SHL_scripts/database/'
1. With different learning rates but without homeostasis¶
Here, we only ensure that the norm of the filters is kept constant.
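For illustration, here is a minimal sketch of this constraint, assuming the dictionary is stored as an array of shape (n_atoms, n_pixels); the array layout and function name are not taken from shl_scripts. After each learning step, every atom is simply rescaled to unit L2 norm.

import numpy as np

def normalize_atoms(dictionary, eps=1e-16):
    # rescale each atom (row) of the dictionary to unit L2 norm
    norms = np.linalg.norm(dictionary, axis=1, keepdims=True)
    return dictionary / (norms + eps)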
In [3]:
for eta in np.logspace(-3, -1.5, N_scan, base=10):
    shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, eta_homeo=0, database=database,
              learning_algorithm='omp', eta=eta, verbose=verbose)
    dico = shl.learn_dico()
    _ = shl.show_dico(dico, title='no homeo - eta={}'.format(eta))
2. Homeostasis à-la-SparseNet¶
In [4]:
for eta in np.logspace(-3, -1.5, N_scan, base=10):
    shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, database=database,
              learning_algorithm='omp', eta=eta, verbose=verbose)
    dico = shl.learn_dico()
    _ = shl.show_dico(dico, title='homeo - eta={}'.format(eta))
In [5]:
for eta_homeo in np.logspace(-3., -1.5, N_scan, base=10):
    shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE,
              learning_algorithm='omp', database=database,
              transform_n_nonzero_coefs=10, eta_homeo=eta_homeo, verbose=verbose)
    dico = shl.learn_dico()
    _ = shl.show_dico(dico, title='homeo - eta_homeo={}'.format(eta_homeo))
3. With different smoothing parameters for the homeostatic gain¶
In [6]:
for alpha_homeo in np.logspace(-2.5, -0.25, N_scan, base=10):
    print(alpha_homeo)
    shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, database=database,
              learning_algorithm='omp', alpha_homeo=alpha_homeo, verbose=verbose)
    dico = shl.learn_dico()
    _ = shl.show_dico(dico, title='homeo - alpha_homeo={}'.format(alpha_homeo))
4. With different sparse coding algorithms¶
In [9]:
learning_algorithms = [
    ('Orthogonal Matching Pursuit, 10 atoms', 'OMP1_N10',
     {'learning_algorithm': 'omp', 'transform_n_nonzero_coefs': 10}),
    ('Orthogonal Matching Pursuit, alpha 0.9', 'OMP_tol',
     {'learning_algorithm': 'omp', 'alpha': .9}),
    ('Least-angle regression, 5 atoms', 'LARS',
     {'learning_algorithm': 'lars', 'transform_n_nonzero_coefs': 5}),
    ('Lasso least-angle regression, 5 atoms', 'Lasso_LARS',
     {'learning_algorithm': 'lasso_lars', 'transform_n_nonzero_coefs': 5}),
    ('Lasso coordinate descent, 5 atoms', 'Lasso_CD',
     {'learning_algorithm': 'lasso_cd', 'transform_n_nonzero_coefs': 5})]

for learning_title, learning_label, learning_kwargs in learning_algorithms:
    print('Dictionary learned from image patches using ' + learning_title)
    shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, database=database,
              verbose=verbose, **learning_kwargs)
    _ = shl.show_dico(shl.learn_dico(), title=learning_label)
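These variants correspond to the sparse coding algorithms available in scikit-learn. As a point of comparison, here is a minimal sketch of how the same coding step could be run directly with sklearn.decomposition.SparseCoder on a learned dictionary; the arrays D and patches, their sizes, and the lasso penalties are placeholders, not values taken from the experiments above.

import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.RandomState(0)
D = rng.randn(100, 256)                        # placeholder dictionary (n_atoms, n_pixels)
D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atoms
patches = rng.randn(500, 256)                  # placeholder image patches

for algorithm, kwargs in [('omp', dict(transform_n_nonzero_coefs=10)),
                          ('lars', dict(transform_n_nonzero_coefs=5)),
                          ('lasso_lars', dict(transform_alpha=0.9)),
                          ('lasso_cd', dict(transform_alpha=0.9))]:
    coder = SparseCoder(dictionary=D, transform_algorithm=algorithm, **kwargs)
    codes = coder.transform(patches)           # sparse codes, shape (n_patches, n_atoms)
    print(algorithm, 'mean L0 norm:', np.mean(np.sum(codes != 0, axis=1)))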
Version used¶
In [7]:
%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
%load_ext version_information
%version_information numpy, shl_scripts
Out[7]: