# A textured Ouchi Illusion

The Ouchi illusion is a powerful demonstration that a static image can produce illusory movement. One striking aspect is that trying to compensate for this illusory movement can make you feel quite dizzy.

The illusion is generated by your own eye movements and is a consequence of the aperture problem, a fundamental problem in vision science. The visual system measures motion with a set of local filters tuned to different orientations, and each filter sees only a small patch of the image, as if through an aperture. For a moving contour, such a local filter can only recover the motion component perpendicular to the contour's orientation; any motion along the contour is invisible to it. As a consequence, the true direction of motion of a contour cannot be recovered from a single local measurement, but only by integrating the responses of many such filters.
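To make the aperture problem concrete, here is a minimal NumPy sketch (independent of MotionClouds): for a drifting grating, any velocity component parallel to the stripes leaves the image sequence completely unchanged, so a local measurement cannot distinguish velocities that share the same normal component.

```python
import numpy as np

# a drifting grating: only the velocity component along the grating's
# normal changes the image sequence
N = 64
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
k = np.array([2 * np.pi / 8, 0.])  # wavevector: stripes along y, normal along x

def frame(v, t):
    # the grating translated by velocity v for a duration t
    return np.cos(k[0] * (x - v[0] * t) + k[1] * (y - v[1] * t))

v_normal = np.array([2., 0.])   # motion purely along the normal
v_oblique = np.array([2., 5.])  # same normal component, extra tangential drift

# the two sequences are pixel-for-pixel identical:
assert np.allclose(frame(v_normal, t=3), frame(v_oblique, t=3))
```

Changing the normal component, on the other hand, does change the frames; that asymmetry is exactly what the Ouchi figure exploits.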

Here, we explore variations of this illusion which use textures instead of regular angles, using the MotionClouds library. The idea is to use the same texture in the two parts of the image (center vs surround), but to rotate the texture in the center by 90°:
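As a toy preview of that recipe (a sketch using plain noise and a hard-edged disk, not the MotionClouds texture and soft mask built below), the compositing amounts to:

```python
import numpy as np

# same texture everywhere, but the central disk holds the 90°-rotated copy
N = 128
rng = np.random.default_rng(42)
texture = rng.random((N, N))

x, y = np.meshgrid(np.linspace(-.5, .5, N), np.linspace(-.5, .5, N), indexing='ij')
mask = (x**2 + y**2) < .2**2  # hard central disk (the notebook uses a soft one)

composite = np.where(mask, np.rot90(texture), texture)
```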

Optimizing the parameters of the texture would help tell us what matters to generate that illusion...

Let's first initialize the notebook:

In [1]:
```import numpy as np
import matplotlib.pyplot as plt
fig_width = 10
figsize = (fig_width, fig_width)
```

### install and load the library

In [2]:
```%pip install MotionClouds
```
```Requirement already satisfied: MotionClouds in /opt/homebrew/lib/python3.11/site-packages (20220927)
Requirement already satisfied: numpy in /opt/homebrew/lib/python3.11/site-packages (from MotionClouds) (1.26.2)
Note: you may need to restart the kernel to use updated packages.
```

In particular, we generate just a single (static) frame:

In [3]:
```def sigmoid(x,            # the input
            slope=61.803,  # this slope is the inverse golden ratio times 100
            threshold=0.5  # the mean of x
            ):
    # return (x > threshold).astype(float)
    return 1 / (1 + np.exp(- slope * (x - threshold)))
```
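With such a steep default slope, this sigmoid acts as a soft binarizer around the threshold, boosting the contrast of the cloud while keeping smooth edges. A quick sanity check (standalone copy of the function above):

```python
import numpy as np

def sigmoid(x, slope=61.803, threshold=0.5):
    # smooth approximation of a step function centered on the threshold
    return 1 / (1 + np.exp(-slope * (x - threshold)))

assert sigmoid(0.5) == 0.5                       # exactly one half at threshold
assert sigmoid(0.6) > 0.99 and sigmoid(0.4) < 0.01  # near-binary away from it
```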
In [4]:
```import MotionClouds as mc
seed = 123456789
N = 512
mc.N_X, mc.N_Y, mc.N_frame = N, N, 1
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
# alternative parameter settings tried along the way:
# params = dict(theta=0, B_sf=.06, sf_0=.3, B_theta=.1)
# params = dict(theta=np.pi/3, B_sf=.5, sf_0=.07, B_theta=.1)
# params = dict(theta=0, B_sf=.2, sf_0=.2, B_theta=.25)
params = dict(theta=0, B_sf=.04, sf_0=.2, B_theta=.25)
env = mc.envelope_gabor(fx, fy, ft, **params)
image = mc.rectif(mc.random_cloud(env, seed=seed)).reshape((mc.N_X, mc.N_Y))
image = sigmoid(image)
```
In [5]:
```import matplotlib.pyplot as plt
for key in ['xtick.bottom', 'xtick.labelbottom', 'ytick.left', 'ytick.labelleft']: plt.rcParams[key] = False
%matplotlib inline
fig, ax = plt.subplots(figsize=figsize)
_ = ax.imshow(image, cmap=plt.gray())
```

### define a crop function

The library has a representation of space that we may take advantage of:

In [6]:
```fig, ax = plt.subplots(figsize=figsize)
_ = ax.imshow(fx, cmap=plt.gray())
print(fx.min(), fx.max())
```
```-0.5 0.498046875
```

We may easily define a central cropping mask:

In [7]:
```rho = .2
# mask = ((fx**2 + fy**2) < rho**2).squeeze()
mask = 1 - sigmoid((fx**2 + fy**2) - rho**2, slope=1e3, threshold=0).squeeze()
print(mask.min(), mask.max(), mask.shape)

fig, ax = plt.subplots(figsize=figsize)
_ = ax.imshow(mask, cmap=plt.gray())
```
```0.0 1.0 (512, 512)
```

From this we define a cropping function:

In [8]:
```def crop_and_merge(image, mask, use_rot=True, use_fill=False, fill=.5):
    N_X, N_Y = image.shape

    image_fig = image.copy()
    if use_rot: image_fig = np.rot90(image_fig)
    image_fig = np.roll(image_fig, N_X//4 + int(N_X//2*np.random.rand()), axis=0)  # roll over the first axis
    image_fig = np.roll(image_fig, N_Y//4 + int(N_Y//2*np.random.rand()), axis=1)  # roll over the second axis

    if use_fill:
        # (reconstructed from the truncated source) replace the surround by a uniform gray
        image = fill * np.ones_like(image)
    # blend the rotated center into the surround using the soft mask
    return image * (1 - mask) + image_fig * mask
```
In [9]:
```fig, ax = plt.subplots(figsize=figsize)
```