Motion-based prediction model for flash lag effect

Abstract

The flash lag effect (FLE) is a well-known visual illusion that reveals a perceptual difference in the position coding of moving and stationary flashed objects. It has been reproduced experimentally in the retina and V1, along with relevant evidence for motion-based position coding in areas MT and MT+. Numerous hypotheses for the mechanisms underlying the FLE, such as motion extrapolation, latency difference, position persistence, temporal averaging and postdiction, have been debated over the last two decades. Here, we challenge our previous motion-based prediction model to account for the FLE, consistently with the motion extrapolation account proposed by Nijhawan. Our hypothesis is based on the predictability of the motion trajectory and on the role of the motion signal in shaping the receptive field of moving objects. Using a probabilistic framework, we implemented motion-based prediction (MBP) and simulated three different demonstrations of the FLE: the standard, flash-initiated and flash-terminated cycles. This method allowed us to compare the shape of the characteristic receptive fields for moving and stationary flashed dots in the case of rightward and leftward motion. As a control, we eliminated the velocity signal from motion estimation and simulated a position-based (PX) model of the FLE. The results of the MBP model suggest that, above a minimal flash duration, the development of a predictive component for the moving object is sufficient to shift its representation in the direction of motion and to produce the flash lag effect. The MBP model reproduces experimental data on the FLE and its dependence on the contrast of the flash. Contrary to what has been argued to be a shortcoming of the motion extrapolation account, a spatial lead of the moving object is also evident in our results for the flash-initiated cycle. Our model, without being restricted to one particular visual area, provides a generic account of the FLE by emphasizing the different treatment of stationary objects and trajectory motion by the sensory system.
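
The contrast between the MBP model and the PX control can be illustrated with a toy predictive tracker. The sketch below is not the paper's probabilistic implementation; it is a minimal, hypothetical Kalman-filter analogue (all names and parameters, such as the sensory delay `tau` and speed `v_true`, are assumptions) showing how extrapolating a moving dot's position over a sensory delay yields a spatial lead of roughly v * tau relative to an estimate that carries no velocity signal, as for a flashed dot.

```python
import numpy as np

# Minimal sketch (assumptions: 1D constant-velocity motion, Gaussian noise,
# a fixed sensory delay `tau`); not the paper's probabilistic motion estimator.

dt, tau = 0.01, 0.08          # time step and assumed sensory latency (s)
v_true = 10.0                 # assumed speed of the moving dot (deg/s)
sigma_obs = 0.2               # assumed observation noise (deg)
T = 1.0                       # duration of the simulated trajectory (s)

times = np.arange(0.0, T, dt)
x_true = v_true * times                                   # true moving-dot trajectory
obs = x_true + sigma_obs * np.random.randn(len(times))    # noisy position samples

# --- MBP analogue: Kalman filter over [position, velocity], then extrapolate by tau ---
A = np.array([[1.0, dt], [0.0, 1.0]])         # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                    # only position is observed
Q = np.diag([1e-4, 1e-2])                     # process noise (assumed)
R = np.array([[sigma_obs**2]])                # observation noise

x_hat = np.zeros(2)                           # state estimate [x, v]
P = np.eye(2)
for z in obs:
    # predict one step with the internal motion model
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    # update with the noisy position sample
    y = z - H @ x_hat
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

# motion-based prediction: extrapolate the moving dot forward over the delay
x_mbp = x_hat[0] + x_hat[1] * tau

# --- PX analogue: no velocity signal, the estimate stays at the last sample ---
x_px = obs[-1]

print(f"estimated position with prediction (MBP-like): {x_mbp:.2f} deg")
print(f"estimated position without prediction (PX-like): {x_px:.2f} deg")
print(f"expected spatial lead ~ v * tau = {v_true * tau:.2f} deg")
```

In this toy setting the flashed dot, having no motion trajectory to predict from, behaves like the PX estimate, so the moving dot appears ahead of it by roughly v * tau, in the spirit of the motion extrapolation account discussed in the abstract.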

Publication
Journal of Vision
Mina A Khoei
Senior AI/ML scientist @ SynSense, Zurich, Switzerland.

PhD in Computational Neuroscience

Laurent U Perrinet
Researcher in Computational Neuroscience

My research interests include Machine Learning and computational neuroscience applied to Vision.