Aleatoric uncertainty characterizes the variability of features found in natural images, and echoes the epistemic uncertainty ubiquitously found in computer vision models. We explore this "uncertainty in, uncertainty out" relationship by generating convolutional sparse coding dictionaries with parametric epistemic uncertainty. This improves sparseness, resilience, and reconstruction of natural images by providing the model with a way to explicitly represent the aleatoric uncertainty of its input. We demonstrate how hierarchical processing can make use of this scheme by training a deep convolutional neural network to classify a sparse-coded CIFAR-10 dataset, showing that encoding uncertainty in a sparse code is as efficient as using conventional images, with additional beneficial computational properties. Overall, this work empirically demonstrates the advantage of partitioning epistemic uncertainty in sparse coding algorithms.
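To make the classification pipeline concrete, here is a minimal PyTorch sketch of the kind of setup described above: once each CIFAR-10 image is replaced by its sparse coefficient maps, a standard CNN can be trained on those maps in place of the RGB channels. The number of dictionary kernels `K` and the architecture below are illustrative assumptions, not the paper's actual network.

```python
# Minimal sketch (assumptions, not the paper's architecture): a CNN classifier
# that takes sparse coefficient maps (K channels) instead of 3 RGB channels.
import torch
import torch.nn as nn

K = 12  # assumed number of dictionary kernels, i.e. coefficient maps per image

classifier = nn.Sequential(
    nn.Conv2d(K, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                               # 32x32 -> 16x16
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                               # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(128 * 8 * 8, 10),                    # 10 CIFAR-10 classes
)

# Usage: a batch of sparse codes (batch, K, 32, 32) maps to class logits.
sparse_codes = torch.randn(8, K, 32, 32)
logits = classifier(sparse_codes)
print(logits.shape)  # torch.Size([8, 10])
```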
Accepted paper (poster) at the ICLR 2023 Workshop on Sparsity in Neural Networks, whose focus is "On practical limitations and tradeoffs between sustainability and efficiency", held in Kigali, Rwanda on May 5th, 2023.
Reviews will be made public at https://openreview.net/forum?id=tgr8FEcl28M
In a nutshell: We found that sparse coding of images (here extended to a convolutional framework) is improved when using kernels with heterogeneous precision in how they encode orientation information. This was confirmed by learning, but also by comparison with the statistics of natural images and with our recordings from neurons in the primary visual cortex.
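The sketch below illustrates the idea in NumPy/SciPy: it builds a small bank of Gabor-like kernels sharing the same orientations but with heterogeneous orientation precision (narrow vs. broad tuning), then sparse-codes an image with convolutional ISTA. The kernel sizes, bandwidths, and the `lam`/`lr` hyper-parameters are illustrative assumptions, not the values used in the paper.

```python
# Minimal sketch (not the paper's code): heterogeneous-precision kernels
# plus convolutional sparse coding by ISTA.
import numpy as np
from scipy.signal import fftconvolve

def gabor(size, theta, sigma_par, sigma_ort, freq=0.2):
    """Oriented Gabor-like kernel; sigma_ort sets the orientation precision."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    u = x * np.cos(theta) + y * np.sin(theta)      # along the orientation
    v = -x * np.sin(theta) + y * np.cos(theta)     # orthogonal to it
    g = np.exp(-(u**2 / (2 * sigma_par**2) + v**2 / (2 * sigma_ort**2)))
    k = g * np.cos(2 * np.pi * freq * u)
    return k / np.linalg.norm(k)

# Heterogeneous precision: same orientations, but precise vs. imprecise kernels.
thetas = np.linspace(0, np.pi, 6, endpoint=False)
widths = [1.5, 4.0]
dictionary = [gabor(13, t, 3.0, w) for t in thetas for w in widths]

def conv_ista(img, kernels, lam=0.1, lr=0.01, n_iter=100):
    """Convolutional sparse coding: argmin_a ||img - sum_k a_k * D_k||^2 + lam ||a||_1."""
    coeffs = [np.zeros_like(img) for _ in kernels]
    for _ in range(n_iter):
        recon = sum(fftconvolve(a, k, mode="same") for a, k in zip(coeffs, kernels))
        residual = img - recon
        for i, k in enumerate(kernels):
            grad = fftconvolve(residual, k[::-1, ::-1], mode="same")   # correlation
            a = coeffs[i] + lr * grad                                  # gradient step
            coeffs[i] = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0.0)  # soft threshold
    return coeffs

# Usage: code a random "image" and report the mean fraction of active coefficients.
img = np.random.randn(32, 32)
coeffs = conv_ista(img, dictionary)
print(np.mean([(np.abs(a) > 1e-6).mean() for a in coeffs]))
```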
This theoretical work accompanies a similar study in neurophysiology (2023). This work was extended in