Extending the high-phi illusion

The High Phi Motion illusion is the illusory perception of a strong shift of motion induced by a slow inducing motion. A demo page is available on the main author's webpage and the effect is described in this excellent paper: Wexler M, Glennerster A, Cavanagh P, Ito H & Seno T (2013). Default perception of high-speed motion. PNAS, 110, 7080-7085. http://wexler.free.fr/papers/highphi.pdf

In this notebook, we will generate an extension of this illusion to address the question of whether it is limited to the one-dimensional motion along a ring, or whether it can be extended to arbitrary, planar 2D motions. This will help decipher some of the factors leading to this "illusion".

TL;DR: one can reproduce the illusion with a planar motion (instead of a rotation), but it seems important that the motion is either restricted to a band-like shape or to a limited range of orientations:

In [28]:
from IPython.display import Video

prefix = '2025-06-09_extending-the-high-phi-illusion'
Video(f'../files/{prefix}/high-phi.mp4', html_attributes="loop=True autoplay=True  controls=True")
Out[28]:
In [29]:
Video(f'../files/{prefix}/high-phi-oriented-inducer-noband.mp4', html_attributes="loop=True autoplay=True  controls=True")
Out[29]:

Let's first initialize the notebook:

In [3]:
import os
import numpy as np
import matplotlib.pyplot as plt

creating a textured motion

As a generic visual texture, let's synthesize a Motion Cloud for the inducer:

In [4]:
# %pip install MotionClouds
In [5]:
import MotionClouds as mc
mc.figpath = os.path.join('../files/', prefix)
os.makedirs(mc.figpath, exist_ok=True)

image_size_az, image_size_el, N_frame = 400, 400, 80
do_mask = True
N_frame_inducer = 60

V = 4.
B_theta_shuffle = np.inf
B_theta_inducer = np.inf
fps = 16
fx, fy, ft = mc.get_grids(image_size_az, image_size_el, N_frame)
In [7]:
name = 'inducer'

opts = dict(sf_0=0.025, B_sf=0.015, B_theta=B_theta_inducer)
env = mc.envelope_gabor(fx, fy, ft, V_X=V, **opts)

mc.figures(env, name, do_figs=False, figpath=mc.figpath, do_mask=do_mask, verbose=True)
mc.in_show_video(name, figpath=mc.figpath)
Before Rectification of the frames
Mean= -2.791230366480235e-09 , std= 3.780516432905623e-05 , Min= -0.00024022903127231458 , Max= 0.0002218101417629138  Abs(Max)= 0.00024022903127231458
After Rectification of the frames
Mean= 0.5 , std= 0.07868658378546557 , Min= 0.0 , Max= 0.9616750712881064
percentage pixels clipped= 0.0
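The "Rectification" reported above maps the raw cloud, which has near-zero mean, into luminance values in $[0, 1]$ around a mid-gray of $0.5$. A simplified numpy sketch of such a step (a stand-in for illustration only; the actual `mc.rectif` implementation may handle contrast and clipping differently):

```python
import numpy as np

def rectif_sketch(z, contrast=1.):
    """Map a raw random texture into [0, 1] around a mid-gray of 0.5.

    Simplified stand-in for mc.rectif; the real function may normalize
    contrast or clip extreme values differently.
    """
    z = z - z.mean()                                    # remove the residual mean
    return 0.5 + 0.5 * contrast * z / np.abs(z).max()   # scale into [0, 1]

rng = np.random.default_rng(42)
movie = rectif_sketch(rng.standard_normal((32, 32, 8)))
print(movie.mean(), movie.min(), movie.max())  # mean close to 0.5, range within [0, 1]
```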

This can be accessed as a numpy array:

In [8]:
movie_inducer = mc.rectif(mc.random_cloud(env, do_mask=do_mask))[:, :, :N_frame_inducer]
print(f'movie_inducer shape = {movie_inducer.shape}')
movie_inducer shape = (400, 400, 60)

The first two axes are the spatial dimensions of pixels ($x$ and $y$); the third is the temporal axis $t$.
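To make this axis convention concrete, here is a toy stand-in array with the same layout as `movie_inducer` (shapes only, no actual texture):

```python
import numpy as np

# Toy stand-in with the same layout as movie_inducer: axes 0 and 1 are
# the spatial pixel dimensions, axis 2 is time.
movie = np.zeros((400, 400, 60))

frame = movie[:, :, 0]      # the full image shown at t = 0
trace = movie[200, 200, :]  # the luminance of one pixel across all frames
```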

shuffled movie

This corresponds in Fourier space to white noise in time and can be parameterized by an infinite bandwidth on the temporal frequency axis:

In [9]:
name = 'shuffle'
N_frame_shuffle = 5
fx, fy, ft = mc.get_grids(image_size_az, image_size_el, N_frame)
# B_V = np.inf  # fully white temporal noise
B_V = 1.*V      # in practice, a finite but broad temporal frequency bandwidth
opts = dict(sf_0=0.025, B_sf=0.015, B_theta=B_theta_shuffle)
env = mc.envelope_gabor(fx, fy, ft, V_X=0., V_Y=0., B_V=B_V, **opts)

mc.figures(env, name, do_figs=False, figpath=mc.figpath, do_mask=do_mask)
mc.in_show_video(name, figpath=mc.figpath)

Similarly, we get a movie:

In [10]:
movie_shuffle = mc.rectif(mc.random_cloud(env, do_mask=do_mask))[:, :, :N_frame_shuffle]
print(f'movie_shuffle shape = {movie_shuffle.shape}')
movie_shuffle shape = (400, 400, 5)
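A temporally decorrelated movie can also be obtained more crudely by permuting the frames of an existing movie along the time axis, as in the original stimulus' frame shuffle. A sketch with a stand-in array, independent of MotionClouds:

```python
import numpy as np

# A rough alternative to an infinite temporal bandwidth B_V: permute
# the frames of an existing movie along the time axis so that
# successive frames are temporally uncorrelated.
rng = np.random.default_rng(2013)
movie = rng.uniform(size=(64, 64, 60))      # stand-in for a textured movie
order = rng.permutation(movie.shape[-1])    # random frame ordering
movie_shuffled = movie[:, :, order]
```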

We can now use these arrays and concatenate them:

In [11]:
name = 'high-phi-concatenated'

N_frame_blank = 15
movie_blank = np.ones((image_size_az, image_size_el, N_frame_blank)) * 0.5

movie_inducer_east = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, V_X=V, V_Y=0, **opts), do_mask=do_mask))[:, :, :N_frame_inducer]
movie_inducer_west = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, V_X=-V, V_Y=0, **opts), do_mask=do_mask))[:, :, :N_frame_inducer]
movie_inducer_north = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, V_X=0, V_Y=V, **opts), do_mask=do_mask))[:, :, :N_frame_inducer]
movie_inducer_south = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, V_X=0, V_Y=-V, **opts), do_mask=do_mask))[:, :, :N_frame_inducer]

movie_highphi = np.concatenate((movie_inducer_east, movie_shuffle, movie_blank, movie_inducer_north, movie_shuffle, movie_blank, 
                                movie_inducer_west, movie_shuffle, movie_blank, movie_inducer_south, movie_shuffle, movie_blank), axis=-1)
mc.anim_save(movie_highphi, os.path.join(mc.figpath, name), figpath=mc.figpath, verbose=False)
mc.in_show_video(name, figpath=mc.figpath)

Surprisingly, the effect is not as striking as with the 1D motion of the original illusion. So let's add a mask to check whether the shape of the moving texture is an important factor:

In [12]:
do_band_inducer = True
do_band_shuffle = True
band_radius = 0.05


name = 'high-phi-band-shuffle'

movie_band = (np.abs(fy) < band_radius)
movie_highphi = np.concatenate((movie_inducer_east, (movie_shuffle-.5)*movie_band[:, :, :N_frame_shuffle]+.5), axis=-1)
mc.anim_save(movie_highphi, os.path.join(mc.figpath, name), figpath=mc.figpath, verbose=False)
mc.in_show_video(name, figpath=mc.figpath)
In [13]:
name = 'high-phi-band'

movie_highphi = np.concatenate(( ((movie_inducer_east-.5)*movie_band[:, :, :N_frame_inducer]+.5), (movie_shuffle-.5)*movie_band[:, :, :N_frame_shuffle]+.5), axis=-1)
mc.anim_save(movie_highphi, os.path.join(mc.figpath, name), figpath=mc.figpath, verbose=False)
mc.in_show_video(name, figpath=mc.figpath)
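The masking arithmetic above deserves a note: subtracting the mid-gray $0.5$, multiplying by the boolean band, then adding $0.5$ back sets every pixel outside the band to blank gray while leaving the texture inside the band untouched. A self-contained check on a small stand-in array:

```python
import numpy as np

# Verify the (movie - 0.5) * band + 0.5 masking trick on a stand-in movie:
# outside the band every pixel becomes exactly 0.5 (blank gray),
# inside the band the texture is unchanged.
rng = np.random.default_rng(0)
movie = rng.uniform(size=(16, 16, 4))
fy = np.linspace(-0.5, 0.5, 16)[:, None, None] * np.ones((16, 16, 4))
band = np.abs(fy) < 0.1                      # horizontal band-shaped mask
masked = (movie - 0.5) * band + 0.5
```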

wrapping up and making a movie

Now that we have all the elements, let's wrap them up in a single function and export the result as a movie:

In [14]:
def make_shots(figname, 
               N_frame_inducer=N_frame_inducer, N_frame_shuffle=N_frame_shuffle, N_frame_blank=N_frame_blank, V=V, 
               image_size_az=image_size_az, image_size_el=image_size_el, N_frame=N_frame, do_mask=do_mask, 
               do_band_inducer=do_band_inducer, do_band_shuffle=do_band_shuffle, band_radius=band_radius,
               sf_0=opts['sf_0'], B_theta_shuffle=B_theta_shuffle, B_theta_inducer=B_theta_inducer, theta=np.pi/4, B_sf=opts['B_sf'],
               fps = fps # frames per second
    ):


    fx, fy, ft = mc.get_grids(image_size_az, image_size_el, N_frame)

    opts = dict(sf_0=sf_0, B_theta=B_theta_shuffle, B_sf=B_sf)

    movie_shuffle_h = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, theta=theta, V_X=0., V_Y=0., B_V=B_V, **opts), do_mask=do_mask))[:, :, :N_frame_shuffle]
    movie_shuffle_v = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, theta=theta+np.pi/2, V_X=0., V_Y=0., B_V=B_V, **opts), do_mask=do_mask))[:, :, :N_frame_shuffle]
    if do_band_shuffle:
        movie_band = (np.abs(fy) < band_radius)[:, :, :N_frame_shuffle]
        movie_shuffle_h = (movie_shuffle_h - 0.5) * movie_band + 0.5

        movie_band = (np.abs(fx) < band_radius)[:, :, :N_frame_shuffle]
        movie_shuffle_v = (movie_shuffle_v - 0.5) * movie_band + 0.5

    movie_blank = np.ones((image_size_az, image_size_el, N_frame_blank)) * 0.5

    opts.update(B_theta=B_theta_inducer)
    movie_inducer_east = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, V_X=V, V_Y=0, theta=0.,  **opts), do_mask=do_mask))[:, :, :N_frame_inducer]
    movie_inducer_west = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, V_X=-V, V_Y=0, theta=0., **opts), do_mask=do_mask))[:, :, :N_frame_inducer]
    movie_inducer_north = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, V_X=0, V_Y=V, theta=np.pi/2, **opts), do_mask=do_mask))[:, :, :N_frame_inducer]
    movie_inducer_south = mc.rectif(mc.random_cloud(mc.envelope_gabor(fx, fy, ft, V_X=0, V_Y=-V, theta=np.pi/2, **opts), do_mask=do_mask))[:, :, :N_frame_inducer]

    if do_band_inducer:
        movie_band = (np.abs(fy) < band_radius)[:, :, :N_frame_inducer]
        movie_inducer_east = (movie_inducer_east - 0.5) * movie_band + 0.5
        movie_inducer_west = (movie_inducer_west - 0.5) * movie_band + 0.5
        movie_band = (np.abs(fx) < band_radius)[:, :, :N_frame_inducer]
        movie_inducer_south = (movie_inducer_south - 0.5) * movie_band + 0.5
        movie_inducer_north = (movie_inducer_north - 0.5) * movie_band + 0.5

    movie_highphi = np.concatenate((movie_inducer_east, movie_shuffle_h, movie_blank, movie_inducer_north, movie_shuffle_v, movie_blank, 
                                    movie_inducer_west, movie_shuffle_h, movie_blank, movie_inducer_south, movie_shuffle_v, movie_blank), axis=-1)
    
    fname = os.path.join(mc.figpath, figname)
    mc.anim_save(movie_highphi, fname, figpath=mc.figpath, fps=fps, verbose=False)
    return fname + mc.vext # returns filename


figname = 'high-phi'
fname = make_shots(figname)
mc.in_show_video(figname, figpath=mc.figpath)

This function allows us to test different configurations.

What if the inducer is short in time?

In [15]:
figname = 'high-phi-short-inducer'
fname = make_shots(figname, N_frame_shuffle=5, N_frame_inducer=10, N_frame_blank=65)
mc.in_show_video(figname, figpath=mc.figpath)

What if the inducer's speed is slower?

In [16]:
figname = 'high-phi-slow'
fname = make_shots(figname, V=1.)
mc.in_show_video(figname, figpath=mc.figpath)

What if the inducer is long in time but the shuffle is short?

In [17]:
figname = 'high-phi-short-shuffle'
fname = make_shots(figname, N_frame_shuffle=5, N_frame_inducer=65, N_frame_blank=10)
mc.in_show_video(figname, figpath=mc.figpath)

What if both are short in time?

In [18]:
figname = 'high-phi-short'
fname = make_shots(figname, N_frame_shuffle=5, N_frame_inducer=5, N_frame_blank=70)
mc.in_show_video(figname, figpath=mc.figpath)

What if both are long in time?

In [19]:
figname = 'high-phi-long'
fname = make_shots(figname, N_frame_shuffle=40, N_frame_inducer=40, N_frame_blank=0)
mc.in_show_video(figname, figpath=mc.figpath)

What if the inducer contains oriented textures?

In [24]:
figname = 'high-phi-oriented-inducer'
fname = make_shots(figname, theta=0., B_theta_inducer=.2)
mc.in_show_video(figname, figpath=mc.figpath)
In [25]:
figname = 'high-phi-oriented-inducer-shuffle'
fname = make_shots(figname, theta=0., B_theta_shuffle=.2, B_theta_inducer=.2)
mc.in_show_video(figname, figpath=mc.figpath)
In [26]:
figname = 'high-phi-oriented-inducer-noband'
fname = make_shots(figname, theta=0., B_theta_shuffle=.2, B_theta_inducer=.2, do_band_inducer=False, do_band_shuffle=False)
mc.in_show_video(figname, figpath=mc.figpath)

And what if this orientation is tilted at 45°?

In [20]:
figname = 'high-phi-diagonal'
fname = make_shots(figname, B_theta_shuffle=.2)
mc.in_show_video(figname, figpath=mc.figpath)
In [21]:
figname = 'high-phi-diagonal-inducer'
fname = make_shots(figname, B_theta_inducer=.2)
mc.in_show_video(figname, figpath=mc.figpath)
In [22]:
figname = 'high-phi-diagonal-both'
fname = make_shots(figname, B_theta_shuffle=.2, B_theta_inducer=.2)
mc.in_show_video(figname, figpath=mc.figpath)

some bookkeeping for the notebook

In [23]:
%load_ext watermark
%watermark -i -h -m -v -p numpy,matplotlib,imageio  -r -g -b
Python implementation: CPython
Python version       : 3.13.5
IPython version      : 9.3.0

numpy     : 2.3.0
matplotlib: 3.10.3
imageio   : 2.37.0

Compiler    : Clang 17.0.0 (clang-1700.0.13.3)
OS          : Darwin
Release     : 24.5.0
Machine     : x86_64
Processor   : i386
CPU cores   : 36
Architecture: 64bit

Hostname: Ahsoka

Git hash: b97d7a23e35afe9d4ccf3f22e96d5aa986946004

Git repo: https://github.com/laurentperrinet/sciblog

Git branch: master

In [ ]:
# HACK
# %rm -f ../files/{prefix}/*mp4