The visual systems of animals operate in diverse and constantly changing environments in which survival requires effective senses. To study the hierarchical brain networks that perform visual information processing, vision scientists need suitable tools, and Motion Clouds (MCs)—a dense mixture of drifting Gabor textons—serve as a versatile solution. Here, we present an open toolbox intended for the bespoke use of MC functions and objects in modeling or experimental psychophysics contexts, including easy integration with Psychtoolbox or PsychoPy environments. The toolbox includes output visualization via a graphical user interface. Real-time visualization of parameter changes gives users an intuitive feel for adjustments to texture features such as orientation, spatiotemporal frequencies, bandwidth, and speed. Vector calculus tools support the frame-by-frame autoregressive generation of fully controlled stimuli, and GPU acceleration allows this to be done in real time for typical stimulus array sizes. We give illustrative examples of experimental use to highlight the potential of both simple and composite stimuli. The toolbox is developed for, and by, researchers interested in psychophysics, visual neurophysiology, and mathematical and computational models. We argue that, in all these fields, MCs can bridge the gap between well-parameterized synthetic stimuli such as dots or gratings and more complex, less controlled natural videos.
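To give a flavor of how such stimuli are built, here is a minimal, self-contained NumPy sketch of a Motion-Cloud-like texture: random-phase noise filtered by a Gaussian envelope in the spatiotemporal Fourier domain, parameterized by spatial frequency, orientation, bandwidths, and speed. This is an illustrative approximation, not the toolbox's own implementation; the function name and parameters below are hypothetical (the actual MotionClouds package exposes its own API).

```python
import numpy as np

def motion_cloud(N_X=64, N_Y=64, N_T=32, f0=0.1, theta=0.0,
                 B_f=0.05, B_theta=0.3, V=1.0, B_V=0.1, seed=0):
    """Sketch of a Motion-Cloud-like dynamic texture.

    Builds a Gaussian envelope in the 3-D Fourier domain around
    spatial frequency f0, orientation theta, and a speed plane of
    slope V, multiplies it by complex Gaussian noise (random phase),
    and inverse-transforms to obtain a (N_X, N_Y, N_T) movie.
    """
    # frequency grids, broadcast to (N_X, N_Y, N_T)
    fx = np.fft.fftfreq(N_X)[:, None, None]
    fy = np.fft.fftfreq(N_Y)[None, :, None]
    ft = np.fft.fftfreq(N_T)[None, None, :]
    f = np.sqrt(fx**2 + fy**2)

    # band around the preferred spatial frequency f0
    env_f = np.exp(-0.5 * (f - f0)**2 / B_f**2)
    # von Mises-like band around the preferred orientation theta
    angle = np.arctan2(fy, fx)
    env_theta = np.exp(np.cos(2 * (angle - theta)) / B_theta**2)
    # concentrate energy near the speed plane ft + V * fx = 0
    env_v = np.exp(-0.5 * (ft + V * fx)**2 / B_V**2)

    env = env_f * env_theta * env_v
    env = env * (f > 0)  # remove the DC (zero spatial frequency) component

    rng = np.random.default_rng(seed)
    noise = (rng.standard_normal((N_X, N_Y, N_T))
             + 1j * rng.standard_normal((N_X, N_Y, N_T)))
    movie = np.fft.ifftn(env * noise).real
    return movie / movie.std()  # normalize to unit contrast energy
```

Sweeping `B_theta` or `B_V` in such a sketch shows the appeal of MCs: a single bandwidth knob moves the stimulus continuously between a grating-like, narrowband texture and broadband, naturalistic motion.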
🚀 Excited to share our new paper:
“DynTex: A real-time generative model of dynamic naturalistic luminance textures”
…now published in Journal of Vision!
🔹 Why it matters: Dynamic textures (e.g., fire, water, foliage) are everywhere, but modeling them in real-time has been a challenge. DynTex bridges this gap with a biologically inspired, efficient approach.
🔹 Key innovation: A generative model that captures the spatiotemporal statistics of natural scenes while running in real-time.
🔹 Applications: Computer vision, neuroscience, VR/AR, and more.
📖 Read it here: https://doi.org/10.1167/jov.25.11.2
More on: https://laurentperrinet.github.io/publication/meso-25/
#DynamicTextures #ComputationalNeuroscience #ComputerVision #GenerativeModels #OpenScience
The Motion Clouds stimuli were originally presented in the following paper (the page links to other resources):
(2012). Examples of use: https://laurentperrinet.github.io/sciblog/categories/motionclouds.html