Top-down feedback in Hierarchical Sparse Coding

Abstract

From a computer science perspective, the problem of optimal representation using Hierarchical Sparse Coding (HSC) is often solved as a stack of independent subproblems, for instance with a Lasso formulation at each layer. However, recent neuroscientific evidence suggests inter-connecting these subproblems, as in the Predictive Coding (PC) theory, which adds top-down feedback connections between consecutive layers. In this study, we assess the impact of this inter-layer feedback connection by comparing the Sparse Deep Predictive Coding (SDPC) model with a Hierarchical Lasso (Hi-La) network made of a sequence of independent Lasso layers. We train 2-layered networks on 3 different databases with SDPC or Hi-La while varying the sparsity of each layer. We first demonstrate that the inference stage of the SDPC converges faster than that of the Hi-La model. We then show that, surprisingly, despite adding this feedback constraint to HSC, the overall prediction error generated by the SDPC is lower. An analysis of the distribution of prediction errors across layers reveals a mechanism in the SDPC that mitigates the increase of the prediction error when sparsity increases. Finally, we discuss the qualitative differences between the dictionaries of the two models and observe that the SDPC features are more generic and may carry more contextual information.
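To make the contrast between the two schemes concrete, below is a minimal, illustrative sketch of two-layer sparse coding inference with ISTA-style updates. It is not the paper's implementation: the function names, the `k_feedback` coupling strength, and the dictionaries `D1`, `D2` are assumptions introduced here for illustration. With `k_feedback = 0` the two layers behave like independent Lasso problems (Hi-La-like); with `k_feedback > 0` the first-layer code is also pulled toward the top-down prediction from the second layer (SDPC-like feedback).

```python
import numpy as np

def soft_threshold(x, lam):
    """Element-wise soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hierarchical_inference(x, D1, D2, lam1, lam2, k_feedback=0.0,
                           n_iter=200, lr=0.01):
    """Two-layer sparse coding inference with ISTA-style gradient steps.

    Illustrative sketch only (names and coupling are assumptions):
      k_feedback = 0.0 -> independent Lasso layers (Hi-La-like).
      k_feedback > 0.0 -> layer-1 code is also driven by the top-down
                          prediction D2 @ a2 (SDPC-like feedback).
    D1: (input_dim, n1) dictionary, D2: (n1, n2) dictionary.
    """
    a1 = np.zeros(D1.shape[1])
    a2 = np.zeros(D2.shape[1])
    for _ in range(n_iter):
        # Bottom-up prediction error: input vs. layer-1 reconstruction
        e1 = x - D1 @ a1
        # Top-down prediction error: layer-1 code vs. layer-2 prediction
        e2 = a1 - D2 @ a2
        # Layer-1 update: driven by e1, optionally corrected by feedback e2
        grad1 = -D1.T @ e1 + k_feedback * e2
        a1 = soft_threshold(a1 - lr * grad1, lr * lam1)
        # Layer-2 update: a plain Lasso step on the current layer-1 code
        grad2 = -D2.T @ e2
        a2 = soft_threshold(a2 - lr * grad2, lr * lam2)
    return a1, a2
```

In this toy formulation, raising `lam1` or `lam2` increases sparsity at the corresponding layer, which is the quantity varied in the experiments described above.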

Publication
Submitted
Victor Boutin
PhD candidate in Computational Neuroscience

During my PhD, I focused on predictive coding in a bio-inspired neural network.
