motion-clouds

Speed-Selectivity in Retinal Ganglion Cells is Sharpened by Broad Spatial Frequency, Naturalistic Stimuli

Motion detection represents one of the critical tasks of the visual system and has motivated a large body of research. However, it remains unclear precisely why the response of retinal ganglion cells (RGCs) to simple artificial stimuli does not …

Bayesian Modeling of Motion Perception using Dynamical Stochastic Textures

A common practice to account for psychophysical biases in vision is to frame them as consequences of a dynamic process relying on optimal inference with respect to a generative model. The present study details the complete formulation of such a …
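For readers unfamiliar with this framing, a generic (and deliberately simplified) Bayesian observer for speed can be written as follows; this is the standard slow-speed-prior formulation given only for orientation, not the paper's full dynamical model:

```latex
% Generic Bayesian observer for speed (illustrative, not the paper's exact model):
% the percept maximizes a posterior combining a likelihood from noisy motion
% measurements with a prior p(v) favouring slow speeds.
\hat{v} = \arg\max_{v}\, p(v \mid I), \qquad
p(v \mid I) \propto p(I \mid v)\, p(v), \qquad
p(v) \propto \exp\!\left(-\frac{\|v\|^2}{2\sigma_p^2}\right)
% A broader likelihood (e.g. at low contrast) pulls \hat{v} toward the prior,
% which is the usual way such models account for perceptual speed biases.
```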

How the dynamics of human smooth pursuit is influenced by speed uncertainty

Voluntary tracking the moving clouds: Effects of speed variability on human smooth pursuit

The properties of motion processing for driving smooth eye movements have been investigated using simple, artificial stimuli such as gratings, small dots or random dot patterns. Motion processing in the context of complex, natural images is less …

Motion Clouds

**MotionClouds** are random dynamic stimuli optimized to study motion perception.
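As a quick illustration of the principle (a standalone NumPy sketch, not the MotionClouds package's actual API; parameter names and values are illustrative), a motion cloud can be synthesized by shaping the Fourier amplitude of white noise around a mean spatial frequency and a plane of constant speed:

```python
import numpy as np

# Minimal 1D-space + time sketch of the MotionClouds principle: random-phase
# noise whose Fourier amplitude is concentrated around a mean spatial
# frequency sf_0 and around the plane of constant speed ft + V_X * fx = 0.
N_X, N_T = 256, 128            # pixels, frames
sf_0, B_sf = 0.05, 0.02        # mean spatial frequency and bandwidth (cycles/pixel)
V_X, B_V = 1.0, 0.3            # mean speed (pixels/frame) and relative speed bandwidth

fx = np.fft.fftfreq(N_X)[:, None]   # spatial frequency axis (column vector)
ft = np.fft.fftfreq(N_T)[None, :]   # temporal frequency axis (row vector)

env_sf = np.exp(-(np.abs(fx) - sf_0) ** 2 / (2 * B_sf ** 2))      # frequency band
env_v = np.exp(-(ft + V_X * fx) ** 2 / (2 * (B_V * sf_0) ** 2))   # speed plane
envelope = env_sf * env_v
envelope[0, 0] = 0.0           # drop the DC component

rng = np.random.default_rng(2024)
noise = rng.standard_normal((N_X, N_T))
movie = np.real(np.fft.ifft2(envelope * np.fft.fft2(noise)))  # random phase from noise FFT
movie /= np.abs(movie).max()   # normalize contrast to [-1, 1]
print(movie.shape)             # (256, 128): one drifting, band-pass texture movie
```

The MotionClouds package itself works in the full (x, y, t) Fourier space and adds envelopes for orientation and temporal bandwidth, as described in the stimulus-synthesis paper listed below; the sketch above only shows the filtered-noise idea.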

A Mathematical Account of Dynamic Texture Synthesis for Probing Visual Perception

Biologically Inspired Dynamic Textures for Probing Motion Perception

Beyond simply faster and slower: exploring paradoxes in speed perception

Estimating object speed in visual scenes is a critical part of perception. While various aspects of speed computation including discrimination thresholds, neural mechanisms and spatial integration mechanisms have been studied, there remain areas to …

The characteristics of microsaccadic eye movements varied with the change of strategy in a match-to-sample task

Under natural viewing conditions, large eye movements are interspersed with small eye movements (microsaccades). Recent work has shown that these two kinds of eye movements are generated by the same oculomotor mechanisms (Goffart et al., 2012) and are …

Dynamic Textures For Probing Motion Perception

This work extends the MotionClouds dynamic texture model, testing aspects of its parametrization with an application in psychophysics.

How and why do image frequency properties influence perceived speed?

Humans are able to interact successfully with moving objects in our dynamic world and the visual system efficiently performs the motion computation that makes this possible. Object speed and direction are estimated following the integration of …

Measuring speed of moving textures: Different pooling of motion information for human ocular following and perception

The visual system does not process information instantaneously, but rather integrates over time. Integration occurs both for stationary objects and moving objects, with very similar time constants (Burr, 1981). We measured, as a function of exposure …

Motion Clouds: Model-based stimulus synthesis of natural-like random textures for the study of motion perception

Choosing an appropriate set of stimuli is essential to characterize the response of a sensory system to a particular functional dimension, such as the eye movement following the motion of a visual scene. Here, we describe a framework to generate …

Effect of image statistics on fixational eye movements

Under natural viewing conditions, small movements of the eyes prevent the maintenance of a steady direction of gaze. It is unclear how the spatiotemporal content of the fixated scene has an impact on the properties of miniature, fixational eye …

Measuring speed of moving textures: Different pooling of motion information for human ocular following and perception.

To measure speed and direction of moving objects, the cortical motion system pools information across different spatiotemporal channels. One yet unsolved question is to understand how the brain pools this information and whether this pooling is …

More is not always better: dissociation between perception and action explained by adaptive gain control

Moving objects generate motion information at different scales, which are processed in the visual system with a bank of spatiotemporal frequency channels. It is not known how the brain pools this information to reconstruct object speed and whether …
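As a rough, hypothetical sketch of what pooling with adaptive gain can mean computationally (the weighting rule below is an assumption made for illustration, not the mechanism reported in the paper):

```python
import numpy as np

# Hypothetical sketch of pooling speed estimates across a bank of
# spatiotemporal frequency channels. Each channel i reports a local speed
# estimate v[i] with a reliability r[i] (e.g. inverse variance); a gain
# exponent rescales how strongly reliability drives the pooled estimate.
def pool_speed(v, r, gain=1.0):
    w = r ** gain                    # adaptive gain control on channel weights
    return np.sum(w * v) / np.sum(w)

# Example: five channels tuned to increasing spatial frequencies.
v = np.array([9.5, 10.0, 10.2, 10.8, 12.0])   # per-channel speed estimates
r = np.array([0.2, 0.8, 1.0, 0.6, 0.1])       # per-channel reliabilities
print(pool_speed(v, r, gain=1.0))   # reliability-weighted average
print(pool_speed(v, r, gain=0.0))   # gain = 0: plain average, no weighting
```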

Pattern discrimination for moving random textures: Richer stimuli are more difficult to recognize

In order to analyze the characteristics of a rich dynamic visual environment, the visual system must integrate information collected at different scales through different spatiotemporal frequency channels. Still, it remains unclear how reliable …

Edge statistics in natural images versus laboratory animal environments: implications for understanding lateral connectivity in V1

Oriented edges in images of natural scenes tend to be aligned in collinear or co-circular arrangements, with lines and smooth curves more common than other possible arrangements of edges (Geisler et al., Vis Res 41:711-24, 2001). The visual system …

Role of homeostasis in learning sparse representations

Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect …
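The efficient-representation idea alluded to here is usually written as a sparse approximation problem; the following generic objective is given for orientation only and is not the paper's specific homeostatic rule:

```latex
% Generic sparse coding objective (illustrative, not the paper's exact scheme):
% approximate an image I with a dictionary \Phi and sparse coefficients a,
\min_{a}\; \tfrac{1}{2}\,\| I - \Phi a \|_2^2 + \lambda\, \| a \|_0 ,
% homeostasis then acts during learning to equalize how often each dictionary
% element is selected, so that no subset of atoms dominates the representation.
```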

Different pooling of motion information for perceptual speed discrimination and behavioral speed estimation

Dynamics of distributed 1D and 2D motion representations for short-latency ocular following

Integrating information is essential to measure the physical 2D motion of a surface from both ambiguous local 1D motion of its elongated edges and non-ambiguous 2D motion of its features such as corners or texture elements. The dynamics of this …
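For background, the 1D/2D distinction rests on the classical aperture constraint, stated here in its textbook form rather than as the paper's dynamical model:

```latex
% Aperture constraint (textbook form): a local 1D edge with unit normal n
% constrains only the normal component of the true 2D velocity v,
v \cdot n = v_{\perp} ,
% so a single edge is compatible with a whole line of velocities; intersecting
% two or more such constraints, or tracking 2D features such as corners or
% texture elements, recovers the full vector v.
```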

Dynamical Neural Networks: modeling low-level vision at short latencies

The machinery behind the visual perception of motion and the subsequent sensorimotor transformation, such as in the ocular following response (OFR), is confronted with uncertainties that are efficiently resolved in the primate's visual system. We may …

Sparse Approximation of Images Inspired from the Functional Architecture of the Primary Visual Areas