Convolutional Sparse Coding is improved by heterogeneous uncertainty modeling

Abstract

Aleatoric uncertainty characterizes the variability of features found in natural images, and echoes the epistemic uncertainty ubiquitously found in computer vision models. We explore this "uncertainty in, uncertainty out" relationship by generating convolutional sparse coding dictionaries with parametric epistemic uncertainty. This improves the sparseness, resilience, and reconstruction of natural images by providing the model with a way to explicitly represent the aleatoric uncertainty of its input. We demonstrate how hierarchical processing can make use of this scheme by training a deep convolutional neural network to classify a sparse-coded CIFAR10 dataset, showing that encoding uncertainty in a sparse code is as efficient as using conventional images, with additional beneficial computational properties. Overall, this work empirically demonstrates the advantage of partitioning epistemic uncertainty in sparse coding algorithms.
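
As a rough illustration of the classification experiment, the hypothetical PyTorch sketch below feeds sparse coefficient maps into a small CNN. The dictionary size `n_atoms` and the architecture are assumptions for illustration, not the model used in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a small CNN classifying sparse coefficient maps.
# `n_atoms` (dictionary size, i.e. input channels) and the layer sizes
# are assumptions, not the paper's actual architecture.
n_atoms, n_classes = 64, 10

model = nn.Sequential(
    nn.Conv2d(n_atoms, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 32x32 -> 16x16
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),              # global average pooling
    nn.Flatten(),
    nn.Linear(128, n_classes),
)

# One sparse-coded CIFAR10 image: n_atoms coefficient maps of 32x32,
# mostly zeros by construction of the sparse code.
x = torch.zeros(1, n_atoms, 32, 32)
x[0, 3, 10, 12] = 1.7                     # a few active coefficients
logits = model(x)
print(logits.shape)                       # torch.Size([1, 10])
```

Because most coefficients are exactly zero, such inputs also lend themselves to sparsity-aware convolution schemes, which is one of the "beneficial computational properties" the abstract alludes to.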

Publication
ICLR 2023 SNN Workshop
  • Accepted paper (poster) at the ICLR 2023 Workshop on Sparsity in Neural Networks.

  • The focus of the workshop is on "practical limitations and tradeoffs between sustainability and efficiency"; it took place in Kigali, Rwanda, on May 5th, 2023.

  • Reviews will be made public at https://openreview.net/forum?id=tgr8FEcl28M.

  • In a nutshell: we found that sparse coding of images (here extended to a convolutional framework) is improved when using kernels with heterogeneous precision in how they encode orientation information (a toy sketch of such a dictionary follows below). This was confirmed by dictionary learning, but also by comparison with the statistics of natural images and with our recordings from neurons in the primary visual cortex.
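
To make "heterogeneous precision in orientation" concrete, here is a minimal NumPy sketch that builds log-Gabor kernels whose orientation bandwidth varies across the dictionary. The parametrization and the bandwidth values are illustrative assumptions, not the paper's exact generative model.

```python
import numpy as np

def log_gabor(size, f0, theta0, b_theta, b_f=0.4):
    """Log-Gabor kernel defined in the Fourier domain, returned in image space.

    b_theta sets the orientation bandwidth ("precision"): small values
    give narrowly tuned kernels, large values broadly tuned ones.
    This parametrization is an illustrative assumption.
    """
    fx, fy = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size))
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1e-6                                      # avoid log(0) at DC
    theta = np.arctan2(fy, fx)
    radial = np.exp(-np.log(f / f0)**2 / (2 * b_f**2))  # log-frequency envelope
    d_theta = np.angle(np.exp(1j * (theta - theta0)))   # wrapped angle difference
    angular = np.exp(-d_theta**2 / (2 * b_theta**2))    # orientation envelope
    kernel = np.real(np.fft.ifft2(radial * angular))    # even-symmetric part
    return np.fft.fftshift(kernel)

# Dictionary with heterogeneous orientation precision: the same orientations,
# each at a range of bandwidths (values are assumptions).
thetas = np.linspace(0, np.pi, 8, endpoint=False)
b_thetas = [0.1, 0.3, 0.6]                              # narrow to broad tuning
dictionary = np.stack([log_gabor(32, f0=0.25, theta0=t, b_theta=b)
                       for t in thetas for b in b_thetas])
print(dictionary.shape)                                 # (24, 32, 32)
```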

Epistemic uncertainty in a CSC dictionary improves both sparseness and reconstruction performance. **(a)** Elements from dictionaries with fixed epistemic uncertainty before (green) and after dictionary learning (orange). **(b)** Elements from a dictionary with heterogeneous epistemic uncertainty before (blue) and after dictionary learning (purple). **(c)** Elements from a dictionary learned from scratch. **(d)** Distribution of the sparseness (top) and Peak Signal-to-Noise Ratio (PSNR, right) of the five dictionaries, shown as a scatter plot for each of the 600 images of the dataset (center). Median values are shown as dashed lines on the histograms.
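
For reference, the two metrics of panel (d) can be computed along the following lines. This sketch assumes sparseness is measured as the fraction of zero coefficients and that PSNR uses the image's peak amplitude; the paper's exact definitions may differ.

```python
import numpy as np

def sparseness(coeffs):
    """Fraction of zero coefficients (one possible l0-style definition)."""
    return np.mean(coeffs == 0)

def psnr(image, reconstruction):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((image - reconstruction)**2)
    peak = np.max(np.abs(image))
    return 10 * np.log10(peak**2 / mse)
```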
Hugo Ladret
PhD candidate in Computational Neuroscience

During my PhD, I am focusing on the role of precision in natural and artificial neural networks.

Laurent U Perrinet
Researcher in Computational Neuroscience

My research interests include machine learning and computational neuroscience applied to vision.