Speed uncertainty and motion perception with naturalistic random textures

Abstract

It is still not fully understood how the visual system integrates motion energy across different spatial and temporal frequencies to build a coherent percept of global motion under complex, noisy, naturalistic conditions. We addressed this question by manipulating the distribution of local speed variability (i.e., speed bandwidth) using a well-controlled class of broadband random-texture stimuli called Motion Clouds (MCs), which have continuous, naturalistic spatiotemporal frequency spectra (Sanz-Leon et al., 2012; Simoncini et al., 2012). In a first 2AFC speed-discrimination experiment, participants compared the speed of a broad speed-bandwidth MC (range: 0.05-8°/s) moving at one of five possible mean speeds (from 5 to 13°/s) to that of another MC with a small speed bandwidth (SD: 0.05°/s), always moving at a mean speed of 10°/s. We found that MCs with larger speed bandwidths (between 0.05 and 0.5°/s) were perceived as moving faster: within this range, speed uncertainty results in an over-estimation of stimulus velocity. Beyond a critical bandwidth (SD: 0.5°/s), however, the perception of a coherent speed was lost. In a second 2AFC direction-discrimination experiment, participants had to estimate the motion direction of moving MCs with different speed bandwidths. We found that for large-bandwidth MCs participants could no longer discriminate motion direction. These results suggest that, as speed bandwidth increases from small to large, the observer experiences different perceptual regimes. We then ran a Maximum Likelihood Difference Scaling experiment (Knoblauch & Maloney, 2008) with our speed-bandwidth stimuli to investigate these possible perceptual regimes. We identified three regimes within this space, corresponding to motion coherency, motion transparency and motion incoherency. These results allow us to further characterize the shape of the interaction kernel between different speed-tuned channels and different spatiotemporal scales (Gekas et al., 2017) that underlies global velocity estimation.
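To give a concrete sense of the stimulus manipulation, the sketch below shows how a Motion Cloud-like texture with a given mean speed and speed bandwidth can be synthesized in the Fourier domain: motion energy is concentrated around the speed plane ft + V·fx = 0, and the bandwidth B_V controls how far off this plane the energy spreads. This is a minimal NumPy illustration, not the published stimulus code (the experiments used the MotionClouds package of Sanz-Leon et al., 2012); the parameter names, units and the simplified Gaussian spatial-frequency envelope are assumptions made for brevity.

```python
# Minimal sketch of a Motion Cloud-like stimulus: a Gaussian envelope around
# the speed plane ft = -V * fx in the 3D Fourier domain, random phases, then
# an inverse FFT. Not the authors' code; parameter conventions are assumed.
import numpy as np

def motion_cloud(N=128, N_frames=64, V=1.0, B_V=0.5, sf_0=0.125, B_sf=0.1, seed=0):
    """Return an (N, N, N_frames) luminance movie drifting horizontally.

    V    : mean horizontal speed, in pixels per frame (assumed convention)
    B_V  : speed bandwidth (SD of the envelope around the speed plane)
    sf_0 : peak spatial frequency (cycles/pixel); B_sf its bandwidth
    """
    fx = np.fft.fftfreq(N)[:, None, None]
    fy = np.fft.fftfreq(N)[None, :, None]
    ft = np.fft.fftfreq(N_frames)[None, None, :]

    f_r = np.sqrt(fx**2 + fy**2)           # spatial radial frequency
    f_r[0, 0, :] = np.inf                  # suppress the DC component

    # Spatial-frequency envelope: Gaussian around sf_0 (log-normal in the
    # original Motion Clouds formulation; a plain Gaussian keeps this short).
    env_sf = np.exp(-0.5 * (f_r - sf_0)**2 / B_sf**2)

    # Speed envelope: energy concentrated around the plane ft + V * fx = 0;
    # B_V sets how far off this plane energy spreads, i.e. the local speed
    # variability manipulated in the experiments.
    env_v = np.exp(-0.5 * (ft + V * fx)**2 / (B_V * f_r)**2)

    envelope = env_sf * env_v

    # Random phases give a naturalistic, texture-like realization.
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(envelope.shape))
    movie = np.fft.ifftn(envelope * phase).real

    # Rectify to the [0, 1] contrast range for display.
    return (movie - movie.min()) / (movie.max() - movie.min())

# Example: a narrow-bandwidth reference vs. a broad-bandwidth test stimulus.
reference = motion_cloud(V=1.0, B_V=0.05)
test = motion_cloud(V=1.0, B_V=0.5)
```

Increasing B_V while keeping V fixed broadens the distribution of local speeds in the texture; under the assumptions above, this is the single parameter whose variation drives the perceptual regimes described in the abstract.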

Publication
Journal of Vision, Vol. 18, 345, Proceedings of VSS
Kiana Mansour-Pour
Executive Director, Shotise

During my PhD, I focused on smooth eye movements.

Laurent U Perrinet
Researcher in Computational Neuroscience

My research interests include Machine Learning and computational neuroscience applied to Vision.