A dynamic model for decoding direction and orientation in macaque primary visual cortex


When objects are in motion, the local orientation of their contours and the direction of motion are two essential components of visual information, processed in parallel in the early visual areas. To probe a neuron's selectivity to moving stimuli, bars or gratings are typically drifted across the neuron's receptive field at various angles. The resulting tuning curve then confounds selectivity to the orientation with selectivity to the direction of motion orthogonal to that orientation. Focusing on the primary visual cortex of the macaque monkey (V1), we compared different models for the joint representation of orientation and direction within the neural activity. Specifically, we considered the response of V1 neurons to an oriented moving bar to investigate whether, and how, information about the bar's orientation and direction could be encoded dynamically at the population level. For that purpose, we used a decoding approach based on a spatio-temporal receptive field model that jointly encodes orientation and direction. Using this model and a maximum-likelihood paradigm, we then inferred the most likely stimulus representation for a given network activity [1, 2]. We tested this model on surrogate data and on extracellular recordings in area V1 of awake macaque monkeys in response to oriented bars moving in 12 different directions. Using cross-validation, we could robustly decode both the orientation and the direction of the bar within the classical receptive field (cRF). This decoding approach further revealed two properties: first, information about the orientation and direction of the bar emerges before the bar enters the cRF; second, when testing different orientations moving in the same direction, our approach shows that the information about direction and orientation can be "unconfounded" by decoding them independently.
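The maximum-likelihood step described above follows the population-decoding scheme of [1]: under a Poisson spiking assumption, the log-likelihood of a stimulus angle given the observed spike counts reduces to a weighted sum of the log tuning curves. The sketch below illustrates this on synthetic data; the tuning model (a hypothetical mixture of a 180°-periodic orientation component and a 360°-periodic direction component per neuron) and all parameter values are illustrative assumptions, not the model fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: each neuron mixes an orientation component
# (180-degree periodic) and a direction component (360-degree periodic).
n_neurons = 64
pref = rng.uniform(0, 2 * np.pi, n_neurons)   # preferred direction per neuron
w_dir = rng.uniform(0, 1, n_neurons)          # direction vs orientation weighting

def rates(theta):
    """Mean firing rate of each neuron for a bar moving in direction theta (rad)."""
    ori = np.exp(2.0 * (np.cos(2 * (theta - pref)) - 1))  # orientation tuning
    dire = np.exp(2.0 * (np.cos(theta - pref) - 1))       # direction tuning
    return 1.0 + 20.0 * (w_dir * dire + (1 - w_dir) * ori)

# Simulate one trial: Poisson spike counts for a bar moving at 60 degrees.
theta_true = np.deg2rad(60)
counts = rng.poisson(rates(theta_true))

# Maximum-likelihood decoding over a grid of candidate directions, using the
# Poisson log-likelihood: log L(theta) = sum_i n_i log f_i(theta) - sum_i f_i(theta).
grid = np.linspace(0, 2 * np.pi, 360, endpoint=False)
logL = np.array([counts @ np.log(rates(t)) - rates(t).sum() for t in grid])
theta_hat = grid[np.argmax(logL)]
print(f"decoded direction: {np.rad2deg(theta_hat):.1f} deg")
```

Because the orientation component is 180°-periodic while the direction component is 360°-periodic, the same likelihood can be marginalised separately over each component, which is the sense in which the two variables can be decoded independently.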
Finally, our results demonstrate that the orientation and the direction of motion of an ambiguous moving bar can be progressively decoded in V1. This is a signature of a dynamic solution to the aperture problem in area V1, similar to what was previously found in area MT [3].

[1] M. Jazayeri and J. A. Movshon. Optimal representation of sensory information by neural populations. Nature Neuroscience, 9(5):690–696, 2006.
[2] W. Taouali, G. Benvenuti, P. Wallisch, F. Chavane, and L. Perrinet. Testing the odds of inherent versus observed over-dispersion in neural spike counts. Journal of Neurophysiology, 2015.
[3] C. Pack and R. Born. Temporal dynamics of a neural solution to the aperture problem in visual area MT of macaque brain. Nature, 409(6823):1040–1042, 2001.

Proceedings of NCCD, Capbreton
Wahiba Taouali
PostDoc in Computational Neuroscience

Motion Integration By V1 Population (Post-Doc, 2013-03 / 2015-01).

Laurent U Perrinet
Researcher in Computational Neuroscience

My research interests include machine learning and computational neuroscience applied to vision.