# Reproducing Olshausen's classical SparseNet (part 3)

In this notebook, we test the convergence of SparseNet as a function of different learning parameters. This shows the relative robustness of the method with respect to the coding parameters, but also the importance of homeostasis for obtaining an efficient set of filters:

• first, whatever the learning rate, convergence is not complete without homeostasis;
• second, for similar learning rates, convergence is better with homeostasis, provided the homeostatic learning rate lies within a certain range;
• third, the smoothing parameter `alpha_homeo` has to be set properly to achieve good convergence;
• last, this homeostatic rule works with the different variants of sparse coding.

This is joint work with Victor Boutin.

In :
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
np.set_printoptions(precision=2, suppress=True)
```
In :
```
from shl_scripts import SHL
list_figures = ['show_dico', 'plot_variance',  'plot_variance_histogram',  'time_plot_prob',  'time_plot_kurt',  'time_plot_var']

# debugging / verbosity presets; only the last assignment is effective
DEBUG_DOWNSCALE, verbose = 10, 100
DEBUG_DOWNSCALE, verbose = 10, 10
DEBUG_DOWNSCALE, verbose = 1, 0

N_scan = 7
database = 'database/'

shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, verbose=verbose)
data = shl.get_data()
```
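The scans below sweep the learning rate over two decades around its default value, using `N_scan = 7` logarithmically spaced multipliers. A quick standalone check of the multipliers that will be applied to `shl.eta` (this snippet is independent of `shl_scripts`):

```python
import numpy as np

# seven multipliers logarithmically spaced between 10**-1 and 10**1
multipliers = np.logspace(-1, 1, 7, base=10)
print(multipliers)
```

The scan is thus symmetric in log-space around the default value (multiplier 1) so that too-small and too-large learning rates are probed equally.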

### 1. With different learning rates but without homeostasis

Here, we only ensure that the norm of the filters is kept constant.

In :
```
shl = SHL()
data = shl.get_data()
for eta in np.logspace(-1, 1, N_scan, base=10)*shl.eta:
    matname = 'no homeo - eta={}'.format(eta)
    shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, eta_homeo=0, eta=eta, verbose=verbose)
    dico = shl.learn_dico(data=data, matname=matname, list_figures=list_figures)
```
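Keeping the norm of the filters constant amounts to renormalizing each filter to unit L2 norm after every gradient step. A minimal sketch of that renormalization (the shapes and the function name are assumptions for illustration, not the `shl_scripts` code):

```python
import numpy as np

def normalize_dico(dico):
    """Rescale each filter (one per row) to unit L2 norm."""
    norm = np.sqrt(np.sum(dico**2, axis=1))[:, np.newaxis]
    return dico / norm

rng = np.random.RandomState(42)
# e.g. 324 filters over 14x14 = 196 pixel patches (hypothetical sizes)
dico = normalize_dico(rng.randn(324, 196))
# every row of dico now has unit norm
```

This constraint alone prevents filters from growing or shrinking arbitrarily, but it does not equalize how often each filter is selected, which is what homeostasis addresses below.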

### 2. Homeostasis à-la-SparseNet

In :
```
shl = SHL()
data = shl.get_data()
for eta in np.logspace(-1, 1, N_scan, base=10)*shl.eta:
    matname = 'homeo - eta={}'.format(eta)
    shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, eta=eta, verbose=verbose)
    dico = shl.learn_dico(data=data, matname=matname, list_figures=list_figures)
```
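The intuition behind the SparseNet-style homeostasis can be pictured with a toy rule: track a smoothed estimate of each filter's activation variance and rescale a per-filter gain so that over-active filters are attenuated and under-active ones amplified. The parameter names `eta_homeo` and `alpha_homeo` mirror those used above, but the update itself is a simplified stand-in for illustration, not the `shl_scripts` implementation:

```python
import numpy as np

def update_gain(gain, variance, code, eta_homeo=0.05, alpha_homeo=0.05):
    """One homeostatic step: smooth each filter's activation variance
    with rate alpha_homeo, then rescale gains (rate eta_homeo) so that
    filters whose variance exceeds the population mean are attenuated."""
    variance = (1 - alpha_homeo) * variance + alpha_homeo * np.mean(code**2, axis=0)
    gain *= (np.mean(variance) / variance)**eta_homeo
    return gain, variance

rng = np.random.RandomState(0)
n_samples, n_dictionary = 1000, 12
gain = np.ones(n_dictionary)
variance = np.ones(n_dictionary)
# filter 0 is intrinsically under-active, filter -1 over-active
scale = np.linspace(0.5, 2.0, n_dictionary)
for _ in range(500):
    code = gain * scale * rng.randn(n_samples, n_dictionary)
    gain, variance = update_gain(gain, variance, code)
# gains converge towards 1/scale, equalizing the effective variances
```

At the fixed point the effective variance `(gain * scale)**2` is the same for all filters, so all filters get a comparable chance of being selected by the sparse coding step, which is the role homeostasis plays in the experiments above.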