Improving calls to the LogGabor library
To code an image as edges, for instance in the SparseEdges
sparse coding scheme, we use a model of edges in images. A good model for these edges is the bidimensional log-Gabor filter, as implemented for instance in the LogGabor
library. The library was designed for precision rather than speed. To improve its efficiency, we demonstrate here the use of a cache to avoid redundant computations.
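As a reminder, a bidimensional log-Gabor filter is defined in the Fourier domain as the product of a log-normal radial envelope around a peak spatial frequency and a Gaussian angular envelope around a preferred orientation. Here is a minimal numpy sketch of that definition (the parameter names sf_0, B_sf, theta_0 and B_theta are only loosely modeled on the library's; its exact parameterization may differ):
import numpy as np

def loggabor_fourier(N_X, N_Y, sf_0=.1, B_sf=.4, theta_0=0., B_theta=np.pi/8):
    # frequency grid in cycles per pixel
    fx, fy = np.meshgrid(np.fft.fftfreq(N_Y), np.fft.fftfreq(N_X))
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1e-12  # avoid log(0) at the DC component
    # radial envelope: log-normal around the peak frequency sf_0
    radial = np.exp(-.5 * (np.log(f / sf_0) / B_sf)**2)
    radial[0, 0] = 0.  # no DC response
    # angular envelope: Gaussian around the preferred orientation theta_0
    dtheta = np.angle(np.exp(1j * (np.arctan2(fy, fx) - theta_0)))  # wrap to [-pi, pi]
    angular = np.exp(-.5 * (dtheta / B_theta)**2)
    return radial * angular

env = loggabor_fourier(256, 256, sf_0=.1, theta_0=np.pi/4)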
Let's first initialize the notebook:
from __future__ import division, print_function
import numpy as np
np.set_printoptions(precision=6, suppress=True)
%load_ext autoreload
%autoreload 2
%load_ext line_profiler
timing of the library without cache
Let's make calls to the library and record the wall clock timing:
from LogGabor import LogGabor
lg = LogGabor('https://raw.githubusercontent.com/bicv/SparseEdges/master/default_param.py')
lg.pe.use_cache = False
lg.pe.verbose = 100
lg.init()
%%timeit
edge = [3*lg.pe.N_X/4, lg.pe.N_Y/2, 2, 2]
FT_lg = lg.loggabor(edge[0], edge[1], sf_0=lg.sf_0[edge[3]], B_sf=lg.pe.B_sf, theta=lg.theta[edge[2]], B_theta=lg.pe.B_theta)
Note that most of the time we compute the filter at the origin; in that case, the translation step can be skipped entirely. This makes the call systematically faster:
%%timeit
edge = [0., 0., 2, 2]
FT_lg = lg.loggabor(edge[0], edge[1], sf_0=lg.sf_0[edge[3]], B_sf=lg.pe.B_sf, theta=lg.theta[edge[2]], B_theta=lg.pe.B_theta)
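The reason the origin case is cheaper is the Fourier shift theorem: translating the filter to (x, y) amounts to multiplying its Fourier transform by a phase ramp, a multiplication that can be skipped when x = y = 0. A minimal sketch of this extra step, independent of the library's internals:
import numpy as np

def translate_fourier(FT_filter, x, y):
    # Fourier shift theorem: a spatial shift by (x, y) is a phase ramp in frequency
    N_X, N_Y = FT_filter.shape
    fx, fy = np.meshgrid(np.fft.fftfreq(N_Y), np.fft.fftfreq(N_X))
    return FT_filter * np.exp(-2j * np.pi * (fy * x + fx * y))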
Using a cache
We will exploit the fact that the many calls to the LogGabor library repeat the same operations. Instead of recomputing them, we can cache the computed matrices. In particular, we will handle the scale (band) and orientation envelopes separately, since their element-wise multiplication is fast in numpy.
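The pattern is plain memoization: key a dictionary on the envelope parameters and compute only on a miss. A minimal sketch of the idea (a hypothetical helper, not the library's actual code), where the radial and angular envelopes would be cached separately and multiplied on demand:
cache = {}

def cached(compute, **params):
    # key the cache on the function name and its parameters; recompute only on a miss
    key = (compute.__name__,) + tuple(sorted(params.items()))
    if key not in cache:
        cache[key] = compute(**params)
    return cache[key]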
lg = LogGabor('https://raw.githubusercontent.com/bicv/SparseEdges/master/default_param.py')
lg.pe.use_cache = True
lg.pe.verbose = 100
lg.init()
print ('Dictionary that will contain the matrices=', lg.cache)
In the beginning, the cache is empty, but every time we compute a matrix, it gets filled up:
edge = [0., 0., 2, 2]
FT_lg = lg.loggabor(edge[0], edge[1], sf_0=lg.sf_0[edge[3]], B_sf=lg.pe.B_sf, theta=lg.theta[edge[2]], B_theta=lg.pe.B_theta)
print ('Dictionary that contains the matrices=', lg.cache)
%%timeit
edge = [3*lg.pe.N_X/4, lg.pe.N_Y/2, 2, 2]
FT_lg = lg.loggabor(edge[0], edge[1], sf_0=lg.sf_0[edge[3]], B_sf=lg.pe.B_sf, theta=lg.theta[edge[2]], B_theta=lg.pe.B_theta)
%%timeit
edge = [0., 0., 2, 2]
FT_lg = lg.loggabor(edge[0], edge[1], sf_0=lg.sf_0[edge[3]], B_sf=lg.pe.B_sf, theta=lg.theta[edge[2]], B_theta=lg.pe.B_theta)
That's a great improvement! Let's now apply it to the Matching Pursuit algorithm implemented in the SparseEdges
library:
application to SparseEdges
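As a reminder, Matching Pursuit greedily selects, at each step, the dictionary element most correlated with the current residual and subtracts its contribution. A minimal, generic sketch over an arbitrary dictionary with normalized atoms (independent of the SparseEdges implementation):
import numpy as np

def matching_pursuit(signal, D, n_atoms=32):
    # D holds one L2-normalized atom per column
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(n_atoms):
        corr = D.T @ residual            # correlation of every atom with the residual
        i = np.argmax(np.abs(corr))      # pick the best-matching atom
        atoms.append(i)
        coeffs.append(corr[i])
        residual -= corr[i] * D[:, i]    # subtract its contribution
    return atoms, coeffs, residual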
from SparseEdges import SparseEdges
mp = SparseEdges('https://raw.githubusercontent.com/bicv/SparseEdges/master/default_param.py')
mp.pe.N = 32 # number of edges
mp.pe.use_cache = False
mp.init()
# defining a test image
image = np.zeros((mp.pe.N_X, mp.pe.N_Y))
image[mp.pe.N_X//2:mp.pe.N_X//2+mp.pe.N_X//4, mp.pe.N_X//2:mp.pe.N_X//2+mp.pe.N_X//4] = 1
image[mp.pe.N_X//2:mp.pe.N_X//2+mp.pe.N_X//4, mp.pe.N_X//4:mp.pe.N_X//2] = -1
%%timeit -n1 -r1
edges, C_res = mp.run_mp(image, verbose=False)
from SparseEdges import SparseEdges
mp = SparseEdges('https://raw.githubusercontent.com/bicv/SparseEdges/master/default_param.py')
mp.pe.N = 32 # number of edges
mp.pe.use_cache = True
mp.init()
# defining a test image
image = np.zeros((mp.pe.N_X, mp.pe.N_Y))
image[mp.pe.N_X//2:mp.pe.N_X//2+mp.pe.N_X//4, mp.pe.N_X//2:mp.pe.N_X//2+mp.pe.N_X//4] = 1
image[mp.pe.N_X//2:mp.pe.N_X//2+mp.pe.N_X//4, mp.pe.N_X//4:mp.pe.N_X//2] = -1
%%timeit -n1 -r1
edges, C_res = mp.run_mp(image, verbose=False)
This shows a performance gain of approximately 25%. These changes are now effective in the code (see this commit).
Further profiling shows that most of the time is spent in the backprop
function:
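With the line_profiler extension loaded above, such a profile can be obtained with the %lprun magic (the attribute name mp.backprop is an assumption about where that function lives):
# hypothetical profiling call; assumes backprop is exposed as a method of mp
%lprun -f mp.backprop mp.run_mp(image, verbose=False)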
some bookkeeping for the notebook
%load_ext watermark
%watermark
%load_ext version_information
%version_information numpy, scipy, matplotlib, sympy, pillow, imageio