accessing the data from a pupil recording
I am experimenting with the pupil eye tracker and could set it up (almost) smoothly on macOS. The documentation is excellent, and my first goal was simply to record raw data and extract eye position.
from IPython.display import HTML
HTML('<center><video controls autoplay loop src="https://laurentperrinet.github.io/sciblog/files/2017-12-13_pupil%20test_480.mp4" width=61.8%/></center>')
This video shows the world view (cranio-centric, from a head-mounted camera fixed on the frame) with the position of the (right) eye overlaid while I am configuring a text box. You see the eye fixating on the screen, then jumping somewhere else on the screen (saccades) or to the keyboard / hands. Note that the screen itself shows the world view, such that this generates a self-recurrent pattern.
For this, I used the capture script, and I will demonstrate here how to extract the raw data in a few lines of Python code.
In particular, we will use existing examples and rely on the documentation of the pupil's data format. Let's first access the data:
import os
home = os.environ['HOME']
path = os.path.join(home, 'science/pupil/pupil/recordings/2017_12_13/003')
!ls {path}
We can have a look at the meta-data:
import pandas as pd
df = pd.read_csv(os.path.join(path, 'info.csv'))
df
Great. Raw data is stored in the pupil_data file. To access it, we will use the following code:
import msgpack
msgpack is a good replacement for pickle; check out its documentation:
with open(os.path.join(path, 'pupil_data'), 'rb') as fh:
    pupil_data = msgpack.unpack(fh, encoding='utf-8')
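As a side note, here is a minimal round-trip sketch of how msgpack serializes a dict shaped like a gaze datum (the values are made up). Note that recent versions of msgpack-python replaced the `encoding='utf-8'` argument used above with `raw=False`:

```python
import msgpack

# a hypothetical gaze datum, just to illustrate the round trip
datum = {'timestamp': 1234.5678, 'norm_pos': [0.5, 0.5], 'confidence': 0.99}

# serialize to bytes, then read it back (newer msgpack-python API)
packed = msgpack.packb(datum, use_bin_type=True)
unpacked = msgpack.unpackb(packed, raw=False)
print(unpacked)  # {'timestamp': 1234.5678, 'norm_pos': [0.5, 0.5], 'confidence': 0.99}
```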
There is certainly a way to do this directly using Pupil's own file_methods module:

from file_methods import load_object
l = load_object(os.path.join(path, 'pupil_data'))
In the meantime, we have all of our data:
pupil_data.keys()
print('number of recording points: ', len(pupil_data['gaze_positions']))
Let's scrutinize one gaze position:
one_gp = pupil_data['gaze_positions'][0]
one_gp
One may thus quickly extract data points as:
import numpy as np
gaze_data = np.array([(one_gp['timestamp'], one_gp['norm_pos'][0], one_gp['norm_pos'][1], one_gp['confidence']) for one_gp in pupil_data['gaze_positions']])
gaze_data[:, 0] -= gaze_data[:, 0].min()
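With the data in this (time, x, y, confidence) array layout, it is easy to filter out low-confidence samples or to estimate the sampling rate. A sketch on synthetic data of the same shape (the values and the 0.8 threshold are arbitrary assumptions, not part of the recording):

```python
import numpy as np

# synthetic stand-in for gaze_data: columns are (time, x, y, confidence)
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 100)
gaze_data = np.column_stack([t,
                             rng.uniform(0, 1, 100),
                             rng.uniform(0, 1, 100),
                             rng.uniform(0.5, 1.0, 100)])

# keep only samples where pupil detection is confident enough
threshold = 0.8
good = gaze_data[gaze_data[:, 3] > threshold]
print(good.shape)

# mean sampling rate, estimated from consecutive timestamps
rate = 1. / np.diff(gaze_data[:, 0]).mean()
print(f'{rate:.1f} Hz')  # 9.9 Hz
```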
To finish, let's plot that data as a function of time (with an evaluation of its confidence):
import matplotlib.pyplot as plt
%matplotlib inline
fig, axs = plt.subplots(3, 1, figsize=(15, 5))
for i in range(3):
    axs[i].plot(gaze_data[:, 0], gaze_data[:, i+1])
    axs[i].set_ylim(0, 1)
axs[0].set_ylabel('x')
axs[1].set_ylabel('y')
axs[2].set_ylabel('confidence')
axs[2].set_xlabel('time (s)');
... or in space:
fig, ax = plt.subplots(1, 1, figsize=(15, 8), subplot_kw=dict(facecolor='gray'))
ax.scatter(gaze_data[:, 1], gaze_data[:, 2], c=gaze_data[:, 0], s=gaze_data[:, -1]*5, alpha=.3)
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_xlabel('x')
ax.set_ylabel('y');
Let's zoom in on the central part of the screen:

fig, ax = plt.subplots(1, 1, figsize=(15, 8), subplot_kw=dict(facecolor='gray'))
ax.scatter(gaze_data[:, 1], gaze_data[:, 2], c=gaze_data[:, 0], s=gaze_data[:, -1]*5, alpha=.3)
ax.set_xlim(.4, .6)
ax.set_ylim(.4, .6)
ax.set_xlabel('x')
ax.set_ylabel('y');
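A final footnote: the norm_pos coordinates plotted above are normalized to the world camera frame. A minimal sketch for converting them to pixel coordinates, assuming a 1280x720 world camera (read the true resolution from the recording's metadata) and Pupil's bottom-left origin convention:

```python
import numpy as np

# frame resolution: an assumption, not read from the recording
width, height = 1280, 720

norm_pos = np.array([0.25, 0.3])
px = norm_pos * np.array([width, height])
# Pupil's normalized origin is bottom-left; flip y to get image
# (top-left origin) coordinates
px_img = np.array([px[0], height - px[1]])
print(px)      # [320. 216.]
print(px_img)  # [320. 504.]
```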