Motion Clouds: An open-source vision-science tool for the auto-regressive generation of dynamic stochastic textures

Abstract

Motion Clouds are a generative model for naturalistic visual stimulation that offers full parametric control and greater naturalism than widely used alternatives such as Random Dot Kinematograms (RDKs) or luminance gratings. We previously released a 3D FFT-based generation algorithm (Sanz-Leon et al., J Neurophysiol, 2012). Here, we present a novel implementation of Motion Clouds that uses an auto-regressive formulation, so that any number of frames can be generated quickly and parameters can be changed in near real time, as needed in closed-loop experiments. We demonstrate a version of the proposed toolbox, to be made available online, that illustrates the level of control afforded. Through a graphical user interface, researchers can use interactive sliders to adjust Motion Cloud parameters such as central frequency, orientation, and bandwidth, gaining an intuitive feel for the parametric changes. We provide functions that can be easily integrated with psychophysics task tools such as Psychtoolbox. Motion Clouds can be used to generate trials of stand-alone moving luminance textures, or added to other stimuli such as images or videos as dynamic noise that disrupts visual processing. The toolbox can be run on GPUs to speed up generation to pseudo-real-time rates for large stimulus arrays of about 1024 by 1024 pixels at 100 Hz. We argue that this tool can enhance visual perception experiments in a range of contexts, and we would like it to be open to extensive testing, use and further development by the psychophysics, computational modelling, functional imaging and neurophysiology communities.
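To make the auto-regressive idea concrete, here is a minimal, illustrative Python/NumPy sketch: each complex Fourier coefficient is driven by an AR(1) recursion whose phase advance encodes the cloud's translation speed and whose innovation variance follows a Gabor-like spectral envelope. This is a sketch under assumed conventions, not the toolbox's actual implementation (which may use a different AR order and parametrisation); all function and parameter names (`make_envelope`, `f0`, `B_f`, `B_theta`, `rho`) are hypothetical.

```python
import numpy as np

def make_envelope(N, f0=0.1, B_f=0.5, theta0=0.0, B_theta=0.3):
    """Hypothetical spectral envelope: log-normal in radial frequency,
    von Mises-like in orientation (parameter names are illustrative)."""
    fx, fy = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1e-6  # avoid log(0) at the DC component
    radial = np.exp(-np.log(f / f0) ** 2 / (2 * B_f**2))
    angle = np.arctan2(fy, fx)
    orient = np.exp(np.cos(2 * (angle - theta0)) / B_theta**2)
    return radial * orient

def ar_motion_cloud(N=256, n_frames=200, speed=(1.0, 0.0), rho=0.98, seed=0):
    """Yield texture frames from an AR(1) recursion on complex Fourier
    coefficients: a <- rho * exp(i*w) * a + innovation noise."""
    rng = np.random.default_rng(seed)
    env = make_envelope(N)
    fx, fy = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
    # Per-frequency phase advance implements rigid translation at `speed`.
    w = 2 * np.pi * (speed[0] * fx + speed[1] * fy)
    transfer = rho * np.exp(1j * w)
    # Innovation scale keeps the stationary power proportional to env**2.
    sigma = env * np.sqrt(1 - rho**2)
    a = np.zeros((N, N), dtype=complex)
    for _ in range(n_frames):
        noise = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
        a = transfer * a + sigma * noise  # one cheap element-wise update
        yield np.fft.ifft2(a).real       # one inverse FFT per frame

for frame in ar_motion_cloud():
    pass  # hand each frame to your stimulus presentation backend
```

Each new frame costs one element-wise update plus one inverse FFT, which is why parameters such as `speed` or `rho` can be changed between frames at negligible extra cost, in keeping with the closed-loop use case described above.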

Publication
European Conference on Visual Perception 2024
Andrew Isaac Meso
Lecturer, King’s College London (IOPPN).
Jonathan Vacher
Maître de Conférence (Associate Professor) @ MAP5, Université Paris-Cité.