Feature vs global motion
When we view a visual scene, the motion of each of the objects that constitute it contributes to the detection of its global motion. In particular, it is an open question how much weight individual features, such as small objects in the foreground, carry in this computation compared to a dense, texture-like stimulus, such as that of the background.
Here, we design a stimulus where we independently control these two aspects of motion to titrate their relative contributions to the detection of motion.
Can you spot the motion? Is it going more to the upper left or to the upper right?
(For a more controlled test, imagine you fixate on the center of the movie.)
Let's start with a texture-like stimulus (a Motion Cloud):
name = 'trajectory'
%matplotlib inline
import matplotlib.pyplot as plt
import os
import numpy as np
import MotionClouds as mc
# define the spatio-temporal grids used both for the envelopes and the masks
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
# a disk-shaped aperture (defined on the same grids) used to mask the movies
disk = mc.frequency_radius(fx, fy, ft) < .5
mc.figpath = '../files/2018-11-29-feature-vs-global-motion'
if not(os.path.isdir(mc.figpath)): os.mkdir(mc.figpath)
Some default parameters for the textons used here:
# textons' parameters: spatial frequency and bandwidth, no preferred orientation (B_theta=np.inf), speed bandwidth and a common vertical speed component
opts = dict(sf_0=0.1, B_sf=0.02, B_theta=np.inf, B_V=2., V_Y=1.)
global_contrast = .5
Let's first define a dense, stationary noise with a single motion:
name_ = name + '_dense'
seed = 42
mc1 = mc.envelope_gabor(fx, fy, ft, V_X=-1., **opts)
if mc.check_if_anim_exist(name_, figpath=mc.figpath): mc.figures(mc1, name_, seed=seed, figpath=mc.figpath)
mc.in_show_video(name_, figpath=mc.figpath)
One can overlay this with a similar texture moving toward the upper right, such that one obtains a texture generalizing a plaid stimulus; we place it within a disk to make all directions iso-probable:
name_ = name + '_plaid'
mc1 = mc.envelope_gabor(fx, fy, ft, V_X=-1., **opts)
movie1 = mc.rectif(mc.random_cloud(mc1, seed=seed))
mc2 = mc.envelope_gabor(fx, fy, ft, V_X=+1., **opts)
movie2 = mc.rectif(mc.random_cloud(mc2, seed=seed+1))
if mc.check_if_anim_exist(name_, figpath=mc.figpath):
    mc.anim_save(mc.rectif(movie1+movie2, contrast=global_contrast)*disk + .5*(1-disk), os.path.join(mc.figpath, name_))
mc.in_show_video(name_, figpath=mc.figpath)
The information is distributed densely in space and time and the motion energy is spread toward the two upper diagonals: the perceived motion is vertical, upward, along the vector average. Note that (especially if you are a well-trained psychophysicist) you may perceive the two components of this motion. To avoid this transparent-motion percept, one can lower the global_contrast scalar.
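For instance, reusing movie1 and movie2 from above, one can render the same plaid at a lower contrast (the value of .25 below is an arbitrary choice, not a calibrated one):
name_ = name + '_plaid_low_contrast'
if mc.check_if_anim_exist(name_, figpath=mc.figpath):
    # same plaid as above, but with a lower global contrast
    mc.anim_save(mc.rectif(movie1+movie2, contrast=.25)*disk + .5*(1-disk), os.path.join(mc.figpath, name_))
mc.in_show_video(name_, figpath=mc.figpath)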
defining features
It is also possible to define a texture where, instead of being dense, the number of "textons" used to generate the texture is relatively sparse:
name_ = name + '_features'
rho = 1.e-3  # sparseness: the expected fraction of space-time positions carrying a texton
# draw a random amplitude at every space-time position...
events = np.random.normal(size=(mc.N_X, mc.N_Y, mc.N_frame))
# ... and keep only a sparse fraction (rho) of them
events *= np.random.uniform(size=(mc.N_X, mc.N_Y, mc.N_frame)) < rho
mc2 = mc.envelope_gabor(fx, fy, ft, V_X=+1., **opts)
movie2 = mc.rectif(mc.random_cloud(mc2, seed=seed+1, events=events))
if mc.check_if_anim_exist(name_, figpath=mc.figpath):
    mc.anim_save(movie2, os.path.join(mc.figpath, name_))
mc.in_show_video(name_, figpath=mc.figpath)
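As a sanity check, one can count the fraction of active textons in the event field defined above (since the amplitudes are drawn from a Gaussian, virtually none is exactly zero):
# fraction of space-time positions carrying a texton; should be close to rho
print('fraction of active events:', (events != 0).mean(), '- target rho:', rho)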
What if we now overlay these two components?
name_ = name + '_overlay_features'
if mc.check_if_anim_exist(name_, figpath=mc.figpath):
    mc.anim_save(mc.rectif(movie1+movie2, contrast=global_contrast)*disk + .5*(1-disk), os.path.join(mc.figpath, name_))
mc.in_show_video(name_, figpath=mc.figpath)
We seem to perceive a motion which goes mostly to the upper left: the dense motion dominates. This seems quite logical, as the energy of the feature component is lower. Is there a way to weight down the dense component to obtain a "Point of Subjective Equality" (PSE)?
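(To check that claim on the energies quantitatively, one may compare the RMS contrast of the two components; both movies are normalized to the same range by mc.rectif, so we measure the deviation from their mean:)
for label, movie in [('dense', movie1), ('features', movie2)]:
    print(label, 'RMS contrast =', (movie - movie.mean()).std())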
name_ = name + '_contrast_PSE'
contrast = .3
if mc.check_if_anim_exist(name_, figpath=mc.figpath):
    mc.anim_save(mc.rectif(contrast*movie1+movie2, contrast=global_contrast)*disk + .5*(1-disk), os.path.join(mc.figpath, name_))
mc.in_show_video(name_, figpath=mc.figpath)
An interesting property would be to determine the contrast at PSE for different values of the sparseness of the feature component. (With the obvious result that it is $1$ for a sparseness of $1$, that is, for a dense feature component.)
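As a starting point for such an experiment, one could generate test stimuli over a grid of sparseness and contrast values; this is a minimal sketch (the grid values are arbitrary choices, and the loop reuses the feature envelope mc2 defined above):
for rho_ in [1.e-4, 1.e-3, 1.e-2]:
    # redraw a sparse event field at the desired sparseness
    events_ = np.random.normal(size=(mc.N_X, mc.N_Y, mc.N_frame))
    events_ *= np.random.uniform(size=(mc.N_X, mc.N_Y, mc.N_frame)) < rho_
    movie2_ = mc.rectif(mc.random_cloud(mc2, seed=seed+1, events=events_))
    for contrast_ in [.1, .3, 1.]:
        name__ = name + '_PSE_rho_%.0e_C_%03d' % (rho_, contrast_*100)
        if mc.check_if_anim_exist(name__, figpath=mc.figpath):
            mc.anim_save(mc.rectif(contrast_*movie1 + movie2_, contrast=global_contrast)*disk + .5*(1-disk), os.path.join(mc.figpath, name__))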
changing the properties of individual features
It is now possible to apply the same procedure to compare the relative weight of the properties of the textons in each component. In particular, can "longer" features ...
name_ = name + '_B_V_features'
B_V2 = .3 * opts['B_V']  # a narrower speed bandwidth yields textons which are more extended in time
opts_ = opts.copy()
opts_.update(B_V=B_V2)
mc2 = mc.envelope_gabor(fx, fy, ft, V_X=+1., **opts_)
movie2 = mc.rectif(mc.random_cloud(mc2, seed=seed+1, events=events))
if mc.check_if_anim_exist(name_, figpath=mc.figpath):
    mc.anim_save(movie2, os.path.join(mc.figpath, name_))
mc.in_show_video(name_, figpath=mc.figpath)
...have a higher weight on the global motion?
name_ = name + '_B_V_PSE'
if mc.check_if_anim_exist(name_, figpath=mc.figpath):
    mc.anim_save(mc.rectif(movie1+movie2, contrast=global_contrast)*disk + .5*(1-disk), os.path.join(mc.figpath, name_))
mc.in_show_video(name_, figpath=mc.figpath)
Similarly, it would be interesting to test how these parameters (contrast, sparseness, precision) trade off against each other to achieve the PSE.
One key aspect of local features is that they are overlaid on top of the background. This is easy to implement:
def overlay(movie1, movie2, rho, do_linear=False):
    if do_linear:
        # transparency: a simple linear blend of the two movies
        return rho*movie1 + (1-rho)*movie2
    else:
        # occlusion: center both movies on the mid-gray level (movie1 weighted by rho)...
        movie1, movie2 = rho*(2.*movie1-1), 2.*movie2-1
        # ... and at each pixel keep the component which deviates most from gray
        movie = movie1 * (np.abs(movie1) > np.abs(movie2)) + movie2 * (np.abs(movie1) <= np.abs(movie2))
        return .5 + .5*movie
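A toy check on scalar "movies" makes the difference between the two modes explicit:
a, b = np.array([.9]), np.array([.6])
print(overlay(a, b, rho=.7, do_linear=True))  # linear blend: .7*.9 + .3*.6 = .81
print(overlay(a, b, rho=.7))  # occlusion: .7*(2*.9-1) = .56 deviates most from gray, so .5 + .5*.56 = .78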
name_ = name + '_B_V_PSE_NL'
if mc.check_if_anim_exist(name_, figpath=mc.figpath):
    mc.anim_save(mc.rectif(overlay(movie1, movie2, rho=.7), contrast=global_contrast)*disk + .5*(1-disk), os.path.join(mc.figpath, name_))
mc.in_show_video(name_, figpath=mc.figpath)
Perceptually, it seems the features take a bit more time to integrate, such that the initial direction of an eye smoothly tracking this motion would first go to the upper left and only later turn upwards...
some bookkeeping for the notebook
%load_ext version_information
%version_information numpy, matplotlib, MotionClouds