Using Singularity on SLURM
In this post, I will display the configuration that I use to run Singularity on a SLURM cluster.
My use case is mostly using a recent version of pyTorch on GPUs.
The Ouchi illusion is a powerful demonstration that static images may produce an illusory movement. One striking aspect is that it makes you feel quite dizzy from trying to compensate for this illusory movement.
The illusion is generated by your own eye movements and is a consequence of the aperture problem, which is a fundamental problem in vision science. The aperture problem is the fact that the visual system can only integrate information along the direction of motion, and not perpendicular to it. This is because the visual system is made of a set of filters that are oriented in different directions, and the integration is done by summing the responses of these filters. The aperture problem is a problem because it means that the visual system cannot recover the direction of motion of a contour from the responses of these filters.
Here, we explore variations of this illusion which would use textures instead of regular angles, using the MotionClouds library. The idea is to use the same texture in the two parts of the image (center vs surround), but to rotate the texture in the center by 90°:
Optimizing the parameters of the texture would help tell us what matters to generate that illusion...
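As a rough idea of how such a figure can be built, here is a minimal sketch assuming the MotionClouds API (get_grids, envelope_gabor, random_cloud); the disk radius and bandwidths are arbitrary choices, not necessarily those of the notebook:

import numpy as np
import MotionClouds as mc

# two static textures with orthogonal orientations (theta vs theta + pi/2)
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, 1)
env_surround = mc.envelope_gabor(fx, fy, ft, V_X=0., V_Y=0., B_V=0., theta=0., B_theta=np.pi/16)
env_center = mc.envelope_gabor(fx, fy, ft, V_X=0., V_Y=0., B_V=0., theta=np.pi/2, B_theta=np.pi/16)
surround = mc.rectif(mc.random_cloud(env_surround))[:, :, 0]
center = mc.rectif(mc.random_cloud(env_center))[:, :, 0]

# paste the 90°-rotated texture inside a central disk (arbitrary radius)
X, Y = np.meshgrid(np.arange(mc.N_X), np.arange(mc.N_Y), indexing='ij')
disk = (X - mc.N_X/2)**2 + (Y - mc.N_Y/2)**2 < (mc.N_X/4)**2
image = np.where(disk, center, surround)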
Here, I try to simulate the patterns obtained on sandy terrain. This is what is observed when the wind blows on a dry, sandy surface, or also at the surface of the sea in the presence of currents.
As described in https://en.wikipedia.org/wiki/Dune,
When a sandy seabed is subject to wave action and the wave orbital motion is strong enough to move sand grains, ripples often appear. The ripples induced by wave action are called “wave ripples”; their characteristics being different from those of the ripples generated by steady flows. The most striking difference between wave ripple fields and current ripple fields is the regularity of the former. Indeed, regular long-crested wave ripple fields are often observed on tidal beaches from which the sea has withdrawn at low water (see figure 1).
A nice example is shown at http://www.coastalwiki.org/wiki/Wave_ripple_formation, showing:
Ripples observed at Sea Rim State Park, along the coast of east Texas close to the border with Louisiana (courtesy of Zoltan Sylvester).
An interesting aspect of these patterns is that they may occur at different scales, like this example on the surface of the planet Mars:
Overview of large sand wave field and high-resolution difference map of two surveys approximately 21 hours apart illustrating both large-scale and small-scale sand wave migration and orientation. Migration is from right to left.
grid_sample from pyTorch
The grid_sample transform is a powerful function which allows one to map any input image onto a new topology. It is notably used in Spatial Transformer Networks, for instance to train a CNN to be invariant to affine transforms. We used it recently in a publication, What You See Is What You Transform: Foveated Spatial Transformers as a Bio-Inspired Attention Mechanism by Ghassan Dabane et al.
The use of grid_sample can be tedious and here, we show how to use it to create a log-polar transform of the image and create the following figure:
A picture (an extract from the painting "The Ambassadors" by Hans Holbein the Younger) can be represented on a regular grid represented by vertical (red) and horizontal (blue) lines. Retinotopy transforms this grid, and in particular the area representing the fovea (shaded gray) is over-represented. Applied to the original image of the portrait, the image is strongly distorted and represents more finely the parts near the axis of sight (here the mouth).
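To give the flavor of it, here is a minimal sketch of such a log-polar sampling grid (the resolution and the minimal radius r_min are arbitrary choices, not necessarily those used in the notebook):

import math
import torch
import torch.nn.functional as F

def logpolar_grid(H, W, r_min=0.05):
    # rows index log-radius (from r_min to 1), columns index angle;
    # grid_sample expects sampling coordinates normalized to [-1, 1] as (x, y)
    log_r = torch.linspace(math.log(r_min), 0., H)
    theta = torch.linspace(-math.pi, math.pi, W)
    r = torch.exp(log_r)[:, None]              # (H, 1)
    x = r * torch.cos(theta)[None, :]          # (H, W)
    y = r * torch.sin(theta)[None, :]
    return torch.stack((x, y), dim=-1)[None]   # (1, H, W, 2)

# img is a (1, C, H, W) tensor (e.g. the portrait loaded with torchvision);
# each row of the output corresponds to one eccentricity, each column to one angle
# logpolar = F.grid_sample(img, logpolar_grid(256, 256), align_corners=True)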
tl;dr: Here we try to infer the transfer of votes between the choices made in two rounds of an election (here the 2022 French presidential elections) using a machine learning method.
In order to analyze the results of elections, for example the last French presidential elections of 2022, and to better understand the dynamics of voting choices between the different groups of the population, it can be useful to use machine learning tools to infer structures that are at first sight hidden in the mass of data. In particular, inspired by this article from Le Monde, one may ask whether we can extract, from the raw election data, an estimate of the transfer of votes between the choices made in the first round and those made in the second round.
To do so, among the mathematical tools of machine learning, we will use probabilities. This theory will allow us to express the fact that the results as they are obtained may show some variability, but that this variability actually results from the preferences of each individual in the voting population. In particular, we can consider that each individual has a preference, graded between $0=0\%$ and $1=100\%$, for each of the choices (candidates, spoiled ballot, blank ballot, abstention) in the first and second rounds. The votes that are cast then correspond to the realization of these preferences.
Of course, the vote remains secret and we have access neither to the vote of each individual nor, a fortiori, to their preferences. But since each polling station shows variability linked to the local context, such that this local population has a preference for some choices rather than others, we can consider each polling station as an individual population for which we will try to predict the results of the second-round vote. By exploiting these local irregularities, polling station by polling station, we will be able to predict (as well as possible) the individual transfer of votes (how each individual moves from one vote to another, for example from "Hidalgo" to "Macron"). In particular, we will show a quite accurate prediction of the second-round data from the first-round data, showing the plausibility of such a hypothesis:
To my knowledge, this is an original contribution (until some kind soul gives me a link to a similar existing method so that I can cite it properly...) which we will exploit here. This prediction, if it is efficient (and we will show that it is on average correct to within less than 2 percentage points of error), can give an idea of the transfer of votes between the two rounds as a function of the voting preferences of each individual.
In the following, we will show how one can estimate the probability of casting a vote for one candidate or the other as a function of the choice expressed in the first round:
As we will see below, this table shows clear trends, for example that if one voted "Macron", "Jadot", "Hidalgo" or "Pécresse" in the first round, then one will almost certainly vote "Macron" in the second round. These voters appear particularly consensual and follow the "republican pact" set up to block the Front National (a "barrage", to use the standard phrase). It also shows that if one voted "Le Pen" or "Dupont-Aignan" in the first round, then one will vote "Le Pen" in the second round, a clear follow-up vote.
Knowing the political colors of the other first-round candidates, one may be surprised that the voters of "Arthaud", "Roussel", "Lassalle" or "Poutou" mostly chose "Le Pen" in the second round, indicating a rejection of the candidate Macron. Zemmour's voters are also split, indicating a rejection of both alternatives. This result should be taken with a grain of salt, since these last candidates obtained fewer votes, so that the inference process is necessarily less precise as less data is available.
In summary, this analysis gives trends as a function of the choices expressed in the first round, showing a clear separation of voting groups.
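To give a rough idea of the kind of model described above (this is only a sketch with placeholder data, not the code of the notebook): for each polling station we have the first-round vote shares X and the second-round shares Y, and we estimate a row-stochastic transfer matrix T such that X @ T predicts Y.

import torch

# placeholder data: replace with the actual per-polling-station vote shares
n_stations, K1, K2 = 1000, 12, 4           # hypothetical numbers of stations and of choices
X = torch.rand(n_stations, K1)
X = X / X.sum(dim=1, keepdim=True)          # first-round shares (each row sums to 1)
Y = torch.rand(n_stations, K2)
Y = Y / Y.sum(dim=1, keepdim=True)          # second-round shares

logits = torch.zeros(K1, K2, requires_grad=True)
optimizer = torch.optim.Adam([logits], lr=0.05)
for _ in range(2000):
    T = torch.softmax(logits, dim=1)        # each first-round choice is fully redistributed
    loss = ((X @ T - Y)**2).mean()          # prediction error on the second-round shares
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(torch.softmax(logits, dim=1))         # estimated transfer-of-votes table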
tl;dr : Crowd-sourcing raw scores for your COSYNE reviewer feedback.
Following that message:
Dear community,
COSYNE is a great conference which plays a pivotal role in our field. If you have submitted an abstract (or several) you have recently received your scores. I am not affiliated with COSYNE - yet I am willing to contribute in some way: I would like to ask one minute of your time to report the raw scores from your reviewers. I will summarize the results in a few lines in one week's time (11/02). The more numerous your answers, the better their precision!
Thanks!
As of 2022-02-20, I had received $N = 98$ answers from the Google form (out of them, $95$ are valid) out of the $881$ submitted abstracts. In short, the result is that the total score $S$ is simply the average of the scores $s_i$ given by each reviewer $i$, weighted by the confidence levels $\pi_i$ (as stated in the email we received from the chairs):
$$ S = \frac{ \sum_i \pi_i\cdot s_i}{\sum_i \pi_i} $$
Or, if you prefer,
$$ S = \sum_i \frac{\pi_i}{\sum_j \pi_j} \cdot s_i $$
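As a quick numerical check of this formula (the scores and confidences below are made up):

import numpy as np
s = np.array([7, 5, 8])    # hypothetical reviewer scores
pi = np.array([4, 2, 3])   # hypothetical confidence levels
S = np.sum(pi * s) / np.sum(pi)
print(S)                   # 6.888...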
We deduce from that formula that the threshold is close to $6.34$ this year:
More details in the notebook (or directly in this post) which can also be forked here and interactively modified on binder.
EDIT: On 2022-02-20, I updated the notebook to account for new answers: I have now received $N = 98$ answers (out of them, $95$ are valid), yet nothing changed qualitatively. On 2022-02-11, I had received $N = 82$ answers from the Google form (out of them, $79$ are valid) and the estimated threshold was close to $6.05$.
It's at the MIAM (Musée International des Arts Modestes) in Sète, France, that I could really experience the Dreamachine for the first time. It's an optical system which consists of a central light which is periodically occluded by a rotating (cardboard?) cylinder.
The magic of it is that the frequency of occlusion is around $12$ Hz, an important resonant state for the sensory system. For the first time, I could really try it out at the MIAM - the important point being to close your eyelids and rest quietly while facing the stroboscopic light source. Surprisingly, you see the emergence of "psychedelic patterns" (of course, less than in hippie movies), yet of the order of the color patterns that may arise in Benham's Disk.
It's difficult to reproduce this pattern on a screen, yet it is still possible to give an impression of it. The goal here is:
to generate a complex visual stimulation flickering on average at $12$ Hz
to project it on a retinotopic space to maximise the "psychedelic" effect
In a previous notebook, I have shown how to fit a psychometric curve using pyTorch. Here, I would like to review some nice properties of the logistic regression model (WORK IN PROGRESS).
Hi! I am Jean-Nicolas Jérémie and the goal of this notebook is to provide a framework to implement (and experiment with) transfer learning on deep convolutional neural networks (DCNN). In a nutshell, transfer learning allows one to re-use the knowledge learned on one problem, such as categorizing images from a large dataset, and apply it to a different (yet related) problem, performing the categorization on a smaller dataset. It is a powerful method as it allows one to implement complex tasks de novo quite rapidly (in a few hours) without having to retrain the millions of parameters of a DCNN (which takes days of computation). The basic hypothesis is that it suffices to re-train the last classification layers (the head) while keeping the first layers fixed. Here, these networks also teach us some interesting insights into how living systems may perform such categorization tasks.
Based on our previous work, we will start from a VGG16 network loaded from the torchvision.models library and pre-trained on the Imagenet dataset, which allows one to perform label detection on natural images for $K = 1000$ labels. Our goal here will be to re-train the last fully-connected layer of the network to perform the same task but on a sub-set of $K = 10$ labels from the Imagenet dataset.
Moreover, we are going to evaluate different strategies of transfer learning:
In this notebook, I will use the pyTorch library for running the networks and the pandas library to collect and display the results. This notebook was done during a master 2 internship at the Neurosciences Institute of Timone (INT) under the supervision of Laurent Perrinet. It is curated in the following github repo.
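As a hint of what such a re-training looks like, here is a minimal sketch using torchvision (the learning rate and the way the 10-label subset is loaded are left as assumptions, not the settings of the notebook):

import torch
import torchvision

K = 10  # size of the sub-set of labels
model = torchvision.models.vgg16(pretrained=True)
for param in model.parameters():
    param.requires_grad = False                     # freeze the pre-trained layers
model.classifier[6] = torch.nn.Linear(4096, K)      # new (trainable) last fully-connected layer

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()
# ... then loop over a dataloader restricted to the K labels to train this new head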
The goal here is to find one non-repeating pattern in an image. First, find the pattern; second, compute the correlation.
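A minimal sketch of this second step, assuming the pattern has already been extracted as a small patch (the arrays below are placeholders):

import numpy as np
from scipy.signal import fftconvolve

def match_template(image, template):
    # cross-correlation of the zero-mean template with the image, computed by FFT
    t = template - template.mean()
    corr = fftconvolve(image, t[::-1, ::-1], mode='same')
    return np.unravel_index(np.argmax(corr), corr.shape)   # position of the best match

# image = ...                         # the full image
# template = image[30:60, 40:80]      # hypothetical patch containing the pattern
# print(match_template(image, template))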
In a previous notebook, I have shown properties of the distribution of stars in the sky. Here, I would like to use the existing database of stars' positions and display them as a triangulation
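In practice, such a triangulation boils down to a few lines with scipy (the star positions below are random placeholders for the actual database):

import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Delaunay

stars = np.random.rand(500, 2)    # placeholder (azimuth, altitude) positions
tri = Delaunay(stars)
plt.triplot(stars[:, 0], stars[:, 1], tri.simplices, lw=.5)
plt.plot(stars[:, 0], stars[:, 1], '.', ms=2)
plt.show()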
Trying to model ocean surface waves, I have played with a linear superposition of sinusoids and also with the fact that phase speed (following a wave's crest) is twice as fast as group speed (following a group of waves). Finally, I more recently used such a random superposition of waves to model the wiggling lines formed by the refraction (bending of the trajectory of a ray at the interface between air and water) of light, called caustics...
Observing real-life ocean waves teaches us that, while a single wave is well approximated by a sinusoid, each wave is qualitatively a bit different. Typical for these surface gravity waves are the sharp crests and flat troughs. As a matter of fact, modelling ocean waves is on the one hand very useful (ocean dynamics and its impact on climate, modelling tides, tsunamis, diffraction in a bay to predict coastline evolution, ...) yet quite demanding despite a well-known mathematical model. Starting from the Navier-Stokes equations for an incompressible fluid (water) in a gravitational field leads to Luke's variational principle under certain simplifying conditions. Further simplifications lead to the approximate solution given by Stokes, which gives the following shape as the sum of different harmonics:
This seems good enough for the moment, and I will capture this shape in this notebook, notably checking if it applies to a random mixture of such waves...
(WORK IN PROGRESS)
The current situation is that these solutions do not seem to fit what is displayed on the wikipedia page and I have not yet spotted the bug I may have introduced... More work on this is needed and any feedback is welcome...
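For reference, a minimal sketch of the textbook third-order Stokes expansion, which is the starting point here (the amplitude and wavenumber are arbitrary, and this is exactly the formula that should be checked against the wikipedia page):

import numpy as np
import matplotlib.pyplot as plt

a, k = .2, 1.                      # amplitude and wavenumber, chosen so that k*a stays small
x = np.linspace(0, 4*np.pi, 1000)
theta = k * x
eta = a*np.cos(theta) + .5*k*a**2*np.cos(2*theta) + 3/8*k**2*a**3*np.cos(3*theta)
plt.plot(x, eta)                   # sharp crests, flat troughs
plt.show()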
Looking at the night sky, the stars on the surface of the sky follow a familiar pattern. The Big Dipper, Cassiopeia, the Pleiades, or Orion are popular landmarks in the sky which we can immediately recognize.
Different civilisations have labelled these patterns, for instance as constellations in the western world. However, this pattern is often the result of pure chance. Stars from one constellation often belong to remote areas of the universe and they bear this familiarity only because we have always seen them as such; the timescale over which stars move appreciably is much longer than the lifespan of humanity.
I am curious here to study the density of stars as they appear on the surface of the sky. It is a simple question, yet complex to formulate. Is there any generic principle that could be used to characterize their distribution? This is my attempt to answer the question
https://astronomy.stackexchange.com/questions/43147/density-of-stars-on-the-surface-of-the-sky
(but also to make the formulation of the question clearer...). This is my answer:
The goal here is to realize a time-lapse using a raspberry π and some python code and finally get this:
(This was done over 10 days, with on almost each day a session in the morning and one in the evening, at irregular times.)
The goal here is to re-implement the Kuramoto model, following a lecture from Joana Cabral and the code that is provided, but using python instead of matlab.
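For reference, the model itself fits in a few lines of numpy (the parameters below are arbitrary, not those of the lecture):

import numpy as np

# Kuramoto model: d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
N, K, dt, n_steps = 100, 2., .01, 2000
omega = np.random.normal(0., 1., N)            # natural frequencies
theta = np.random.uniform(0., 2*np.pi, N)      # initial phases
for _ in range(n_steps):
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta += dt * (omega + K * coupling)
R = np.abs(np.exp(1j*theta).mean())            # order parameter measuring synchrony
print(R)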
The goal here is to use the MotionClouds library to generate figures with second-order contours, similar to those used in the P. Roelfsema's group.
I propose here a simple method to fit experimental data common to epidemiological spreads, such as the present COVID-19 pandemic, using the inverse Gaussian distribution. This follows the general incomprehension which met my answer to the question Is the COVID-19 pandemic curve a Gaussian curve? on StackOverflow. My initial point is to say that a Gaussian is not adapted as it models a distribution over all real numbers, while such a curve (the variable being a number of days) lives on the half-line. Inspired by the excellent A Theory of Reaction Time Distributions by Dr Fermin Moscoso del Prado Martin, a constructive approach is to propose another distribution, such as the inverse Gaussian distribution.
This notebook develops this same idea on real data and proves numerically how bad the Gaussian fit is compared to the latter. Thinking about the importance of doing a proper inference in such a case, I conclude
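A minimal sketch of such a comparison with scipy, on synthetic data (I do not reproduce the real dataset here):

import numpy as np
from scipy import stats

days = np.random.default_rng(0).wald(20., 50., size=1000)   # synthetic "number of days" data
mu, loc, scale = stats.invgauss.fit(days, floc=0)            # inverse Gaussian fit
m, s = stats.norm.fit(days)                                  # Gaussian fit, for comparison
ll_invgauss = stats.invgauss.logpdf(days, mu, loc, scale).sum()
ll_gauss = stats.norm.logpdf(days, m, s).sum()
print(ll_invgauss, ll_gauss)    # the inverse Gaussian reaches a (much) higher log-likelihood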
Hi! I am Jean-Nicolas Jérémie and the goal of this benchmark is to offer a comparison between different pre-trained image recognition networks based on the Imagenet dataset, which allows one to work on natural images for $1000$ labels. The different networks tested here are taken from the torchvision.models library: AlexNet, VGG16, MobileNetV2 and ResNet101.
Our use case is to measure the performance of a system which receives a sequence of images and has to make a decision as soon as possible, hence with batch_size=1. Specifically, we also wish to compare different computing architectures such as CPUs, desktop GPUs or other more exotic platforms such as the Jetson TX2 (experiment 1). Additionally, we will implement some image transformations such as up/down-sampling (experiment 2) or transforming to grayscale (experiment 3) to quantify their influence on the accuracy and computation time of each network.
In this notebook, I will use the Pytorch library for running the networks and the pandas library to collect and display the results. This notebook was done during a master 1 internship at the Neurosciences Institute of Timone (INT) under the supervision of Laurent PERRINET. It is curated in the following github repo.
Jupyter notebooks are a great way of sharing knowledge in science, art, programming. For instance, in a recent musing, I tried to programmatically determine the color of the sky. This renders as a web page, but is also a piece of runnable code.
As such, they are also great ways to store the knowledge that was acquired at a given time and that could be reusable. This may be considered as bad programming and may have downsides, as described in these slides:
Recently, thanks to an answer to a stack overflow question, I found a way to overcome this by detecting if the call to a notebook is made from the notebook itself or from a parent.
Our sensorial environment contains multiple regularities which our brain uses to optimize its representation of the world: objects fall most of the time downwards, the nose is usually in the middle below the eyes, the sky is blue... Concerning this last point, I wish here to illustrate the physical origins of this phenomenon and in particular the range of colors that you may observe in the sky.
Caustics (wikipedia) are luminous patterns which result from the superposition of smoothly deviated light rays. It is for instance the heart-shaped pattern in your cup of coffee which is formed as the rays of the sun are reflected on the cup's surface. It is also the wiggly pattern of light curves that you will see on the floor of a pool as the sun's light is refracted at the surface of the water. Here, we simulate that particular physical phenomenon, simply because these patterns are mesmerizingly beautiful, but also because they are of interest in visual neuroscience. Indeed, they speak to how images are formed (more on this later), hence how the brain may understand images.
In this post, I will develop a simple formalism to generate such patterns, with the paradoxical result that it is very simple to code yet generates patterns with great complexity, such as:
This is joint work with artist Etienne Rey, in which I especially follow the ideas put forward in the series Turbulence.
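To give an idea of the formalism, here is a one-dimensional sketch (the surface profile and the parameters are arbitrary, not those used for the figures): vertical rays are refracted at a wavy water surface following Snell's law, and the histogram of their impacts on the floor already shows bright caustic lines.

import numpy as np
import matplotlib.pyplot as plt

n_water, depth = 1.33, 5.
x = np.linspace(0, 10, 200000)                                # incoming vertical rays
h = .1*np.sin(2*np.pi*x) + .05*np.sin(2*np.pi*3.1*x + 1.)     # arbitrary wavy surface height
slope = np.gradient(h, x)
incidence = np.arctan(slope)                                  # angle to the surface normal
refraction = np.arcsin(np.sin(incidence) / n_water)           # Snell's law
x_floor = x + depth * np.tan(incidence - refraction)          # impact points on the floor
plt.hist(x_floor, bins=2000)                                  # peaks draw the caustics
plt.show()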
A quick note as to how to create an hexagonal grid.
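The trick is simply to shift every other row by half a step and to compress the vertical spacing by $\sqrt{3}/2$; a minimal numpy sketch:

import numpy as np

n_x, n_y, step = 20, 20, 1.
X, Y = np.meshgrid(np.arange(n_x, dtype=float), np.arange(n_y, dtype=float))
X[1::2] += .5              # shift odd rows by half a step
Y *= np.sqrt(3)/2          # vertical spacing of a hexagonal (triangular) lattice
X, Y = X*step, Y*step      # (X, Y) now hold the centers of the hexagonal grid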
The goal here is to compare methods which fit data with psychometric curves using logistic regression. Indeed, after (long) experiments where for instance you collected sequences of keypresses, it is important to infer at best the parameters of the underlying processes: was the observer biased, or more precise?
While I was forever using sklearn or lmfit (that is, scipy's minimize) and praised these beautifully crafted methods, I sometimes lacked some flexibility in the definition of the model. This notebook was done in collaboration with Jenna Fradin, master student in the lab.
tl; dr = Do not trust the coefficients extracted by a fit without validating for methodological biases.
One part of the flexibility I missed is taking care of the lapse rate, that is, the frequency with which you just miss the key. In a psychology experiment, you often see a fast sequence of trials for which you have to make a perceptual decision, for instance press the Left or Right arrow. Sometimes you know the answer you should have given, but press the wrong key. This rate of distraction errors is usually low (on the order of 5% to 10%) but could potentially change the results of the experiments. This is one of the aspects we will evaluate here.
In this notebook, I define a fitting method using pytorch which fits in a few lines of code :
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

torch.set_default_tensor_type("torch.DoubleTensor")
criterion = torch.nn.BCELoss(reduction="sum")

class LogisticRegressionModel(torch.nn.Module):
    # psychometric model: lapse rate sigmoid(logit0), threshold theta0 and width exp(log_wt)
    def __init__(self, bias=True, logit0=-2, theta0=0, log_wt=torch.log(0.1*torch.ones(1))):
        super(LogisticRegressionModel, self).__init__()
        # self.linear = torch.nn.Linear(1, 1, bias=bias)
        self.theta0 = torch.nn.Parameter(theta0 * torch.ones(1))
        self.logit0 = torch.nn.Parameter(logit0 * torch.ones(1))
        self.log_wt = torch.nn.Parameter(log_wt * torch.ones(1))

    def forward(self, theta):
        p0 = torch.sigmoid(self.logit0)
        # out = p0 / 2 + (1 - p0) * torch.sigmoid(self.linear(theta))
        out = p0 / 2 + (1 - p0) * torch.sigmoid((theta - self.theta0) / torch.exp(self.log_wt))
        return out

learning_rate = 0.005
beta1, beta2 = 0.9, 0.999
betas = (beta1, beta2)
num_epochs = 2 ** 9 + 1
batch_size = 256
amsgrad = False  # gives similar results
amsgrad = True  # gives similar results

def fit_data(
    theta,
    y,
    learning_rate=learning_rate,
    batch_size=batch_size,  # gamma=gamma,
    num_epochs=num_epochs,
    betas=betas,
    verbose=False, **kwargs
):
    Theta, labels = torch.Tensor(theta[:, None]), torch.Tensor(y[:, None])
    loader = DataLoader(
        TensorDataset(Theta, labels), batch_size=batch_size, shuffle=True
    )
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    logistic_model = LogisticRegressionModel()
    logistic_model = logistic_model.to(device)
    logistic_model.train()
    optimizer = torch.optim.Adam(
        logistic_model.parameters(), lr=learning_rate, betas=betas, amsgrad=amsgrad
    )
    for epoch in range(int(num_epochs)):
        logistic_model.train()
        losses = []
        for Theta_, labels_ in loader:
            Theta_, labels_ = Theta_.to(device), labels_.to(device)
            outputs = logistic_model(Theta_)
            loss = criterion(outputs, labels_)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            losses.append(loss.item())
        if verbose and (epoch % (num_epochs // 32) == 0):
            print(f"Iteration: {epoch} - Loss: {np.sum(losses)/len(theta):.5f}")

    # evaluate the final loss on the full dataset (on the same device as the model)
    logistic_model.eval()
    Theta, labels = Theta.to(device), labels.to(device)
    outputs = logistic_model(Theta)
    loss = criterion(outputs, labels).item() / len(theta)
    return logistic_model, loss
and run a series of tests to compare both methods.
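For instance, a minimal usage sketch on synthetic data (the ground-truth parameters below are made up):

import numpy as np

np.random.seed(42)
N, theta0, wt, p0 = 1000, .1, .2, .1               # hypothetical ground-truth parameters
theta = np.random.uniform(-1, 1, N)
p = p0/2 + (1 - p0) / (1 + np.exp(-(theta - theta0)/wt))
y = (np.random.rand(N) < p).astype(float)           # simulated keypresses
logistic_model, loss = fit_data(theta, y, verbose=True)
print(logistic_model.theta0.item(), torch.exp(logistic_model.log_wt).item(),
      torch.sigmoid(logistic_model.logit0).item())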
Motion Clouds were originally defined to provide parameterized moving textures. In that other post, we defined a simple code to generate static images. Can we generate a series of images while changing the phase globally?
I have previously shown a python implementation which allows for the extraction of a sparse set of edges from an image. We were using the raw luminance as the input to the algorithm. What happens if you use gamma correction?
This may not be the case for other types of images which would justify an image-by-image local gain control.
More information on sparse coding is given in the following book chapter (see also https://laurentperrinet.github.io/publication/perrinet-15-bicv ):
@inbook{Perrinet15bicv,
title = {Sparse models},
author = {Perrinet, Laurent U.},
booktitle = {Biologically-inspired Computer Vision},
chapter = {13},
editor = {Keil, Matthias and Crist\'{o}bal, Gabriel and Perrinet, Laurent U.},
publisher = {Wiley, New-York},
year = {2015}
}
You may have a bunch of files that you want to convert from one format to another: images, videos, music, text, ... How do you convert them while using ZSH as your shell language in a single line?
I will take the example of music files which I wish to transform from FLAC to OPUS.
This video shows the results of unsupervised learning with different types of kernel normalization. This is to illustrate the results obtained in this paper, An Adaptive Homeostatic Algorithm for the Unsupervised Learning of Visual Features, which is now in press.
The goal of this first task is to create a "raster plot" showing the reproducibility of a spike train across repetitions of the same stimulus. In particular, we will try to replicate figure 1 of Mainen & Sejnowski (1995).
This notebook was developed during a practical session of the Master 1 in Cognitive Science of Aix-Marseille University.
This video shows different MotionClouds with different complexities, from a crystal-like grating to textures with an increasing span of spatial frequencies (resp. "MC Narrow" and "MC Broad"). This is to illustrate the different stimuli used in this paper on the characterization of speed-selectivity in the retina, available @ https://www.nature.com/articles/s41598-018-36861-8 .
The goal here is to check if the Von Mises distribution is the a priori choice to make when handling polar coordinates.
In this post, we try to show how one can infer the sparse representation of an image knowing an appropriate generative model for its synthesis. We will start with a linear inversion (pseudo-inverse deconvolution), and then move to a gradient descent algorithm. Finally, we will implement a convolutional version of the iterative shrinkage thresholding algorithm (ISTA) and its fast version, FISTA.
For computational efficiency, all convolutions will be implemented by a Fast Fourier Transform, so that the result is mathematically exactly equivalent to a standard convolution. We will benchmark this on a realistic image size of $512 \times 512$, giving some timing results on a standard laptop.
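For the record, such an FFT-based convolution boils down to a few lines of numpy; this minimal sketch implements the circular version and ignores padding details:

import numpy as np

def fft_convolve(image, kernel):
    # circular convolution computed in the Fourier domain
    K = np.fft.fft2(kernel, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * K))

image = np.random.rand(512, 512)
kernel = np.random.rand(16, 16)
out = fft_convolve(image, kernel)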
As we watch a visual scene, the motion of each of the objects that constitute the scene contributes to the detection of its global motion. In particular, it is debated how much weight individual features, such as small objects in the foreground, have in this computation compared to a dense, texture-like stimulus, such as that of the background.
Here, we design a stimulus where we control these two aspects of motion independently, to titrate their relative contribution to the detection of motion.
Can you spot the motion? Is it going more to the upper left or to the upper right?
(For a more controlled test, imagine you fixate on the center of the movie.)
Motion detection has many functions, one of which is the potential localization of the biomimetic camouflage seen in this video. Can we test this in the lab by using synthetic textures such as MotionClouds?
However, by construction, MotionClouds have no spatial structure and it seems interesting to consider more complex trajectories. Following a previous post, we design a trajectory embedded in noise.
Can you spot the motion? From left to right or reversed?
(For a more controlled test, imagine you fixate on the top left corner of each movie.)
(Upper row is coherent, lower row incoherent; left column is ->, right column is <-)
When I see a rainbow, I perceive the luminance inside the arc to be brighter than outside the arc. Is this effect perceptual (inside our head) or physical (inside each droplet in the sky)? So, this is a simple notebook to show how to synthesize the image of a rainbow on a realistic sky. TL;DR: there must be a physical reason for it.
Outline: The rainbow is a set of colors over a gradient of hues, masked for certain ones. The sky will be a gradient over blueish colors.
What does the input to a population of neurons in the primary visual cortex look like? In this post, we will try to have a feeling of the structure and statistics of the natural input to such a "ring" model.
This notebook explores this question using a retina-like temporal filtering and oriented Gabor-like filters. It produces this polar plot of the instantaneous energy in the different orientations for a natural movie :
One observes different striking features in the structure of this input to populations of V1 neurons:
This structure is specific to the structure of natural images and to the way they transform (translations, rotations, zooms due to the motion and deformation of visual objects). This is certainly incorporated as a "prior" information in the structure of the visual cortex. As to know how and where this is implemented is an open scientific question.
This is joint work with Hugo Ladret.
PyTorch is a great library for machine learning. You can in a few lines of codes retrieve a dataset, define your model, add a cost function and then train your model. It's quite magic to copy and paste code from the internet and get the LeNet network working in a few seconds to achieve more than 98% accuracy.
However, it can sometimes be tedious to extend existing objects and here, I will explore some ways to define the right dataset for your application. In particular, I will modify the call to a standard dataset (MNIST) to place the characters at random positions in a large image.
When creating large simulations, you may sometimes need to create a unique identifier for each of them. This is useful to cache intermediate results, for instance. This is the main function of hashes. We will here create a simple one-liner function to generate one.
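A minimal sketch of such a one-liner (the parameters are of course placeholders):

import hashlib

def get_tag(**params):
    # deterministic short identifier built from the (sorted) simulation parameters
    return hashlib.sha1(repr(sorted(params.items())).encode()).hexdigest()[:8]

print(get_tag(N=512, seed=42, noise=.1))   # e.g. used to name a cache file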
MotionClouds may be considered as a control stimulus - it seems more interesting to consider more complex trajectories.
In some recent modeling work:
Laurent Perrinet, Guillaume S. Masson. Motion-based prediction is sufficient to solve the aperture problem. Neural Computation, 24(10):2726--50, 2012 https://laurentperrinet.github.io/publication/perrinet-12-pred
we study the role of transport in modifying our perception of motion. Here, we test what happens when we change the amount of noise in the stimulus.
In this script the predictive coding is done using the MotionParticles package and for a motion texture within a disk aperture.
I am experimenting with the pupil eyetracker and could set it up (almost) smoothly on a macOS. There is an excellent documentation, and my first goal was to just record raw data and extract eye position.
from IPython.display import HTML
HTML('<center><video controls autoplay loop src="https://laurentperrinet.github.io/sciblog/files/2017-12-13_pupil%20test_480.mp4" width=61.8%/></center>')
This video shows the world view (cranio-centric, from a head-mounted camera fixed on the frame) with the position of the (right) eye overlaid while I am configuring a text box. You see the eye fixating on the screen then jumping somewhere else on the screen (saccades) or to the keyboard / hands. Note that the screen itself shows the world view, such that this generates a self-recurrent pattern.
For this, I could use the capture script and I will demonstrate here how to extract the raw data in a few lines of python code.
%load_ext autoreload
%autoreload 2
Natural images follow statistics inherited by the structure of our physical (visual) environment. In particular, a prominent facet of this structure is that images can be described by a relatively sparse number of features. To investigate the role of this sparseness in the efficiency of the neural code, we designed a new class of random textured stimuli with a controlled sparseness value inspired by measurements of natural images. Then, we tested the impact of this sparseness parameter on the firing pattern observed in a population of retinal ganglion cells recorded ex vivo in the retina of a rodent, the Octodon degus. These recordings showed in particular that the reliability of spike timings varies with respect to the sparseness with globally a similar trend than the distribution of sparseness statistics observed in natural images. These results suggest that the code represented in the spike pattern of ganglion cells may adapt to this aspect of the statistics of natural images.
In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient to this algorithm working on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see https://laurentperrinet.github.io/publication/perrinet-10-shl ). Compared to other posts, such as this previous post, we improve the code to not depend on any parameter (namely the C parameter of the rescaling function). For that, we will use a non-parametric approach based on the use of cumulative histograms.
This is joint work with Victor Boutin and Angelo Francisioni. See also the other posts on unsupervised learning.
To code an image as edges, for instance in the SparseEdges sparse coding scheme, we use a model of edges in images. A good model for these edges is the bidimensional Log-Gabor filter. This is implemented for instance in the LogGabor library. The library was designed to be precise, but not particularly for efficiency. In order to improve its speed, we demonstrate here the use of a cache to avoid redundant computations.
Convolutions are essential components of any neural network, image processing or computer vision pipeline... but these are also a bottleneck in terms of computations... I will here benchmark different solutions using numpy, scipy or pytorch. This is work-in-progress, so that any suggestion is welcome, for instance on StackExchange or in the comments below this post.
In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient to this algorithm working on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see https://laurentperrinet.github.io/publication/perrinet-10-shl ). Compared to the previous post, we integrated the faster code to https://github.com/bicv/SHL_scripts.
See also the other posts on unsupervised learning,
This is joint work with Victor Boutin.
In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient to this algorithm working on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see https://laurentperrinet.github.io/publication/perrinet-10-shl ). Compared to the previous post, we optimize the code to be faster.
See also the other posts on unsupervised learning,
This is joint work with Victor Boutin.
In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient to this algorithm working on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see https://laurentperrinet.github.io/publication/perrinet-10-shl ). In particular, we will show how one can build the non-linear functions based on the activity of each filter and which implement homeostasis.
See also the other posts on unsupervised learning,
This is joint work with Victor Boutin.
Since the beginning, we have used a definition of bandwidth in the spatial frequency domain which was quite standard (see supp material for instance):
$$ \mathcal{E}(f; sf_0, B_{sf}) \propto \frac{1}{f} \cdot \exp \left( -.5 \frac{\log\left( \frac{f}{sf_0} \right)^2}{\log\left( 1 + \frac{B_{sf}}{sf_0} \right)^2} \right) $$
This is implemented in the following code which reads:
env = 1./f_radius*np.exp(-.5*(np.log(f_radius/sf_0)**2)/(np.log((sf_0+B_sf)/sf_0)**2))
However the one implemented in the code looks different (thanks to Kiana for spotting this!), so that one can think that the code is using:
$$ \mathcal{E}(f; sf_0, B_{sf}) \propto \frac{1}{f} \cdot \exp \left( -.5 \frac{\log\left( \frac{f}{sf_0} \right)^2}{\log\left( \left( 1 + \frac{B_{sf}}{sf_0} \right)^2 \right)} \right) $$
The difference is minimal, yet very important for a correct definition of the bandwidth!
An essential dimension of motion is speed. However, this notion is prone to confusion as the speed that has to be measured can be relative to different objects. Is it the speed of pixels? The speed of visual objects? We try to distinguish the latter two in this post.
from IPython.display import Image
Image('http://www.rhsmpsychology.com/images/monocular_IV.jpg')
In a previous notebook, we tried to reproduce the learning strategy specified in the framework of the SparseNet algorithm from Bruno Olshausen. It allows one to efficiently code natural image patches by constraining the code to be sparse. In particular, we saw that in order to optimize competition, it is important to control cooperation, and we implemented a heuristic to do just this.
In this notebook, we provide an extension to the SparseNet algorithm. We will study how homeostasis (cooperation) may be an essential ingredient to this algorithm working on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see https://laurentperrinet.github.io/publication/perrinet-10-shl ):
@article{Perrinet10shl,
Title = {Role of homeostasis in learning sparse representations},
Author = {Perrinet, Laurent U.},
Journal = {Neural Computation},
Year = {2010},
Doi = {10.1162/neco.2010.05-08-795},
Keywords = {Neural population coding, Unsupervised learning, Statistics of natural images, Simple cell receptive fields, Sparse Hebbian Learning, Adaptive Matching Pursuit, Cooperative Homeostasis, Competition-Optimized Matching Pursuit},
Month = {July},
Number = {7},
Url = {https://laurentperrinet.github.io/publication/perrinet-10-shl},
Volume = {22},
}
This is joint work with Victor Boutin.
In this notebook, we test the convergence of SparseNet as a function of different learning parameters. This shows the relative robustness of this method according to the coding parameters, but also the importance of homeostasis to obtain an efficient set of filters:
alpha_homeo has to be properly set to achieve a good convergence. See also:
This is joint work with Victor Boutin.
In a previous notebook, we tried to reproduce the learning strategy specified in the framework of the SparseNet algorithm from Bruno Olshausen. It allows to efficiently code natural image patches by constraining the code to be sparse.
However, the dictionaries are qualitatively not the same as the one from the original paper, and this is certainly due to the lack of control in the competition during the learning phase.
Herein, we re-implement the cooperation mechanism in the dictionary learning routine - this will be then proposed to the main code.
This is joint work with Victor Boutin.
This notebook tries to reproduce the learning strategy specified in the framework of the SparseNet algorithm from Bruno Olshausen. It allows to efficiently code natural image patches by constraining the code to be sparse.
The underlying machinery uses a dictionary learning similar to the one used in the image denoising example from sklearn, and our aim here is to show that a novel ingredient is necessary to reproduce Olshausen's results.
All these code bits are regrouped in the SHL scripts repository (where you will also find some older matlab code). You may install it using
pip install git+https://github.com/bicv/SHL_scripts
This is joint work with Victor Boutin.
In the context of a course in Computational Neuroscience, I am teaching a basic introduction in Probabilities, Bayes and the Free-energy principle.
Let's learn to use probabilities in practice by generating some "synthetic data", that is, by using the computer's random number generator.
I enjoyed reading "A tutorial on the free-energy framework for modelling perception and learning" by Rafal Bogacz, which is freely available here. In particular, the author encourages to replicate the results in the paper. He is himself giving solutions in matlab, so I had to do the same in python all within a notebook...
A set of bash code to resize images to a fixed size.
Problem statement: we have a set of images with heterogeneous sizes and we want to homogenize the database to avoid problems when classifying them. Solution: ImageMagick.
We first identify the size and type of images in the database. The database is a collection of folders containing each a collection of files. We thus do a nested recursive loop:
Let's explore generators and the yield statement in the python language...
Sometimes, you need to pick the $N$-th extremal values in a multi-dimensional matrix.
Let's suppose it is represented as an nd-array (here, I further suppose you are using the numpy library from the python language). Finding extremal values is easy with argmax, argmin or argsort, but these functions operate on flattened (1d) vectors... Juggling around indices is sometimes not such an easy task, but luckily, we have the unravel_index function.
For those in a hurry, one quick application is that given an np.ndarray, it's easy to get the index of the maximal value in that array:
import numpy as np
x = np.arange(2*3*4).reshape((4, 3, 2))
x = np.random.permutation(np.arange(2*3*4)).reshape((4, 3, 2))
print ('input ndarray', x)
idx = np.unravel_index(np.argmax(x.ravel()), x.shape)
print ('index of maximal value = ', idx, ' and we verify that ', x[idx], '=', x.max())
Let's unwrap how we found such an easy solution...
It is insanely useful to create movies to illustrate a talk, blog post or just to include in a notebook:
from IPython.display import HTML
HTML('<center><video controls autoplay loop src="../files/2016-11-15_noise.mp4" width=61.8%/></center>')
For years I have used a custom-made solution built around saving single frames and then calling ffmpeg to convert those files to a movie file. That function (called anim_save) had to be maintained across different libraries to reflect new needs (going to WEBM and MP4 formats for instance). That made the code longer than necessary and it had no place in a scientific library.
Here, I show how to use the animation library from matplotlib to replace that.
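As an illustration, a minimal sketch using FuncAnimation (assuming ffmpeg is installed for the mp4 writer; the animated content is just a placeholder sine wave):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation

fig, ax = plt.subplots()
x = np.linspace(0, 2*np.pi, 256)
line, = ax.plot(x, np.sin(x))

def update(i):
    # shift the phase at each frame
    line.set_ydata(np.sin(x + i/10))
    return (line,)

anim = animation.FuncAnimation(fig, update, frames=100, interval=40, blit=True)
anim.save('movie.mp4', writer='ffmpeg', fps=25)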
A lively community of people including students, researchers and tinkerers from Marseille (France) celebrates the so-called "π-day" on the 3rd month, 14th day of each year. A nice occasion for general talks on mathematics and society in a lively atmosphere and of course for... a pie contest!
I participated last year (in 2016) with a pie called "Monte Carlo". Herein, I give the recipe by giving some clues on its design... This page is a notebook - meaning that you can download it and re-run the analysis I do here at home (and most importantly comment or modify it and correct potential bugs...).
A lively community of students, researchers and tinkerers in Marseille celebrates "π-day" on the 3rd month, 14th day of each year. A dream occasion to learn more about mathematics and science in a friendly atmosphere... but it is also the occasion of a pie contest!
I had the opportunity to take part last year (that is, in the 2016 edition) with a pie called "Monte Carlo". Here I will give the "recipe" of my pie, its link with the number π and a few mathematical digressions (notably regarding the incongruous presence of an elephant, but also regarding the scientific method)... This page is a notebook - you can therefore download it and re-run the analyses and figures (using python + jupyter). It is also a work in progress - please suggest corrections!
After reading the paper Motion Direction Biases and Decoding in Human Visual Cortex by Helena X. Wang, Elisha P. Merriam, Jeremy Freeman, and David J. Heeger (The Journal of Neuroscience, 10 September 2014, 34(37): 12601-12615; doi: 10.1523/JNEUROSCI.1034-14.2014), I was interested to test the hypothesis they raise in the discussion section :
The aperture-inward bias in V1–V3 may reflect spatial interactions between visual motion signals along the path of motion (Raemaekers et al., 2009; Schellekens et al., 2013). Neural responses might have been suppressed when the stimulus could be predicted from the responses of neighboring neurons nearer the location of motion origin, a form of predictive coding (Rao and Ballard, 1999; Lee and Mumford, 2003). Under this hypothesis, spatial interactions between neurons depend on both stimulus motion direction and the neuron's relative RF locations, but the neurons themselves need not be direction selective. Perhaps consistent with this hypothesis, psychophysical sensitivity is enhanced at locations further along the path of motion than at motion origin (van Doorn and Koenderink, 1984; Verghese et al., 1999).
Concerning the origins of aperture-inward bias, I want to test an alternative possibility. In some recent modeling work:
Laurent Perrinet, Guillaume S. Masson. Motion-based prediction is sufficient to solve the aperture problem. Neural Computation, 24(10):2726--50, 2012 https://laurentperrinet.github.io/publication/perrinet-12-pred
I was surprised to observe a similar behavior: the trailing edge was exhibiting a stronger activation (i. e. higher precision revealed by a lower variance in this probabilistic model) while I would have thought intuitively the leading edge would be more informative. In retrospect, it made sense in a motion-based prediction algorithm as information from the leading edge may propagate in more directions (135° for a 45° bar) than in the trailing edge (45°, that is a factor of 3 here). While we made this prediction we did not have any evidence for it.
In this script the predictive coding is done using the MotionParticles package and for a motion texture within a disk aperture.
Motion Clouds were originally defined as moving textures controlled by a few parameters. The library is also capable to generate a static spatial texture. Herein I describe a solution to generate a single static frame.
For a master's project in computational neuroscience, we adopted a quite novel workflow covering all the steps, from learning the basics to the writing of the final thesis. Though we were flexible in our method during the 6 months of this work, a simple workflow emerged that I describe here.
Motion Clouds were originally defined to provide a simple parameterization for textures. Thus we used a simple unimodal, normal distribution (on the log-radial frequency space to be more precise). But the larger set of Random Phase Textures may provide some interesting examples, some of which can even be fun! This is the case of this simulation of the waves you may observe on the surface of the ocean.
The main features of such surface gravity waves are:
More info about deep water waves : http://farside.ph.utexas.edu/teaching/336L/Fluidhtml/node122.html
A feature of MotionClouds is the ability to precisely tune the precision of information along the principal axes. One which is particularly relevant for the primary visual cortical area of primates (area V1) is to tune the orientation mean and bandwidth.
An essential component of natural images is that they may contain order at large scales such as symmetries. Let's try to generate some textures with an axial (arbitrarily vertical) symmetry and take advantage of the different properties of random phase textures.
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
We will now use elastic forces to coordinate the dynamics of the blades in the frame.
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
See the video at http://vimeo.com/150813922
This meta-post handles publication on https://laurentperrinet.github.io/sciblog.
This is an old blog post, see the newer version in this post
This is an old blog post, see the newer version in this post and following.
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
This post produces the final montage of the sequences.
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
This post creates waves propagating along the series of blades.
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
This post implements a configuration favoring right angles.
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
This post implements a configuration producing a propagating wave.
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
This post implements a dynamic on the focal point.
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
This post explores the parameters of the structure and of the extent of the reflections.
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
This post studies a few fundamental principles of multiple reflections in mirrors.
Testing some multi-threading libraries
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
This post studies how to sample points on the structure.
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
This post studies a reaction-diffusion process on the structure.
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
This post studies the connection between the raspberry π (which runs the simulations) and the arduinos (which drive the motors).
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
This post simulates a control configuration over the whole structure.
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
This post simulates a Fresnel-like configuration over the whole structure.
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a nearly infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; the permanent movement continually requalifies what is seen and heard.
This post simulates a rendering of the reflections using povray.
A scene with mirrors rendered with vapory (see https://laurentperrinet.github.io/sciblog/posts/2015-01-16-rendering-3d-scenes-in-python.html )
A smooth transition of MotionClouds while smoothly changing their parameters.
The Matching Pursuit algorithm is popular in signal processing and applies well to digital images.
I have contributed a python implementation and we will show here how we may use that for extracting a sparse set of edges from an image.
@inbook{Perrinet15bicv,
title = {Sparse models},
author = {Perrinet, Laurent U.},
booktitle = {Biologically-inspired Computer Vision},
chapter = {13},
editor = {Keil, Matthias and Crist\'{o}bal, Gabriel and Perrinet, Laurent U.},
publisher = {Wiley, New-York},
year = {2015}
}
When processing images, it is useful to avoid artifacts, in particular when you try to understand biological processes. In the past, I have used natural images (found on internet, grabbed from holiday pictures, ...) without controlling for possible problems.
In particular, digital pictures are taken on pixels which are most often placed on a rectangular grid. It means that if you rotate that image, you may lose information and distort it and thus get wrong results (even for the right algorithm!). Moreover, pictures have a border while natural scenes do not, unless you are looking at it through an aperture. Intuitively, this means that large objects would not fit on the screen and are less informative.
In computer vision, it is easier to handle these problems in Fourier space. There, an image (that we suppose square for simplicity) is transformed in a matrix of coefficients of the same size as the image. If you rotate the image, the Fourier spectrum is also rotated. But as you rotate the image, the information that was in the corners of the original spectrum may span outside the spectrum of the rotated image. Also, the information in the center of the spectrum (around low frequencies) is less relevant than the rest.
Here, we will try to keep as much information about the image as possible, while removing the artifacts related to the process of digitalizing the picture.
This is an old blog post, see the newer version in this post
This is an old blog post, see the newer version in this post
This is an old blog post, see the newer version in this post
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a quasi-infinite stacking of horizons. By a principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; its permanent motion continually requalifies what is being watched and heard.
This post uses a natural image as the input to a "trame sensorielle" (sensory lattice).
A long standing dependency of MotionClouds is MayaVi. While powerful, it is tedious to compile and may discourage new users. We are trying here to show some attempts to do the same with the vispy library.
In another post, we tried to use matplotlib, but this had some limitations. Let's now try vispy, a scientific visualisation library in python using opengl.
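As a first attempt, here is a minimal vispy sketch of volume-rendering a 3D array (the random data below is only a stand-in for a MotionCloud, and the camera settings are arbitrary):
import numpy as np
from vispy import app, scene

canvas = scene.SceneCanvas(keys='interactive', show=True)
view = canvas.central_widget.add_view()
data = np.random.rand(64, 64, 64).astype(np.float32)   # stand-in volume
volume = scene.visuals.Volume(data, parent=view.scene)  # volume rendering of the 3D array
view.camera = scene.cameras.TurntableCamera(fov=60)
app.run()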
I was recently trying to embed a clip in a jupyter notebook using MoviePy, something that was working smoothly with python 2.7. After switching to python 3, this was not working anymore and left me scratching my head for a solution.
We live in an open-source world, so I filed an issue:
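For reference, what I was trying to do was roughly the following (a sketch with a recent MoviePy; the clip path is a placeholder):
from moviepy.editor import VideoFileClip
clip = VideoFileClip('../files/some_clip.mp4')   # placeholder path
clip.ipython_display(width=500)                  # embeds the clip in the notebook output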
The installation Elasticité dynamique acts as a filter and generates new, multiplied spaces, like a quasi-infinite stacking of horizons. By a principle of reflection, the piece absorbs the image of its surroundings and accumulates points of view; its permanent motion continually requalifies what is being watched and heard.
We will now use elastic forces to coordinate the dynamics of the slats within the lattice.
I have heard Vancouver can get foggy and cloudy in the winter. Here, I will provide some examples of realistic simulations of it...
This stimulation was used in the following poster presented at VSS:
@article{Kreyenmeier2016,
author = {Kreyenmeier, Philipp and Fooken, Jolande and Spering, Miriam},
doi = {10.1167/16.12.457},
issn = {1534-7362},
journal = {Journal of Vision},
month = {sep},
number = {12},
pages = {457},
publisher = {The Association for Research in Vision and Ophthalmology},
title = {{Similar effects of visual context dynamics on eye and hand movements}},
url = {http://jov.arvojournals.org/article.aspx?doi=10.1167/16.12.457},
volume = {16},
year = {2016}
}
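A hedged sketch of how such a fog-like stimulus can be generated with MotionClouds (the parameter values below are illustrative guesses, not those of the actual stimuli): an isotropic, low-spatial-frequency cloud drifting slowly looks convincingly foggy.
import numpy as np
import MotionClouds as mc

fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
# isotropic orientation content (B_theta=inf), low spatial frequency, slow drift
env = mc.envelope_gabor(fx, fy, ft, sf_0=0.02, B_sf=0.02,
                        B_theta=np.inf, V_X=0.5, B_V=0.5)
mc.anim_save(mc.rectif(mc.random_cloud(env)), 'fog')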
In a previous notebook, we saw how to create a hexagonal grid and how to animate it.
We will now use MoviePy to animate these plots.
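The recipe is roughly the following (a minimal sketch where a sine curve stands in for the hexagonal grid):
import numpy as np
import matplotlib.pyplot as plt
import moviepy.editor as mpy
from moviepy.video.io.bindings import mplfig_to_npimage

fig, ax = plt.subplots()
x = np.linspace(0, 2*np.pi, 200)
line, = ax.plot(x, np.sin(x))

def make_frame(t):
    # update the figure for time t and return it as an image
    line.set_ydata(np.sin(x + 2*np.pi*t))
    return mplfig_to_npimage(fig)

animation = mpy.VideoClip(make_frame, duration=2.)
animation.write_videofile('grid.mp4', fps=25)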
Script done in collaboration with Jean Spezia.
Script done in collaboration with Jean Spezia.
Gizeh (that is, Cairo for tourists) is a great interface to the Cairo drawing library.
I recently wished to make a small animation of a bar moving in the visual field and crossing a simple receptive field, to illustrate some simple motions that could be captured in the primary visual cortex and experiments that could be done there.
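A minimal sketch of that idea (sizes, speed and colors below are arbitrary choices, not those of the final clip): a white bar drifting across a gray field, drawn with gizeh and animated with MoviePy.
import gizeh
import moviepy.editor as mpy

W, H, duration = 256, 256, 2.   # pixels, pixels, seconds

def make_frame(t):
    surface = gizeh.Surface(W, H, bg_color=(.5, .5, .5))
    # a thin vertical bar whose horizontal position scans the frame over the clip
    bar = gizeh.rectangle(lx=10, ly=H, xy=(W * t / duration, H / 2), fill=(1, 1, 1))
    bar.draw(surface)
    return surface.get_npimage()

clip = mpy.VideoClip(make_frame, duration=duration)
clip.write_videofile('bar.mp4', fps=30)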
TIKZ is a great language for producing vector graphics. It is however a bit tedious to go through the whole $\LaTeX$-like compilation when you are used to an ipython notebook work-flow.
I describe here how to use a cell magic implemented by http://www2.ipp.mpg.de/~mkraus/python/tikzmagic.py and a hack to use euclide within the graph (as implemented in https://github.com/laurentperrinet/ipython_magics).
The following snippet shows how you can create a 3D rendered scene in a few lines of code (from http://zulko.github.io/blog/2014/11/13/things-you-can-do-with-python-and-pov-ray/):
import vapory
camera = vapory.Camera( 'location', [0, 2, -3], 'look_at', [0, 1, 2] )
light = vapory.LightSource( [2, 4, -3], 'color', [1, 1, 1] )
sphere = vapory.Sphere( [0, 1, 2], 2, vapory.Texture( vapory.Pigment( 'color', [1, 0, 1] )))
scene = vapory.Scene(camera = camera , # a Camera object
objects = [light, sphere], # POV-Ray objects (items, lights)
included = ["colors.inc"]) # headers that POV-Ray may need
# passing 'ipython' as argument at the end of an IPython Notebook cell
# will display the picture in the IPython notebook.
scene.render('ipython', width=900, height=500)
Here are more details...
Following this post http://carreau.github.io/posts/10-No-PyLab-Thanks.ipynb.html, here are, all in one single cell, the bits necessary to import the most useful libraries in an ipython notebook:
# import numpy and set the printed precision to something humans can read
import numpy as np
np.set_printoptions(precision=2, suppress=True)
# set some prefs for matplotlib
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams.update({'text.usetex': True})
fig_width_pt = 700. # Get this from LaTeX using \showthe\columnwidth
inches_per_pt = 1.0/72.27 # Convert pt to inches
fig_width = fig_width_pt*inches_per_pt # width in inches
FORMATS = ['pdf', 'eps']
phi = .5*np.sqrt(5) + .5 # useful ratio for figures
# define plots to be inserted interactively
%matplotlib inline
#%config InlineBackend.figure_format='retina' # high-def PNGs, quite bad when using file versioning
%config InlineBackend.figure_format='svg'
Below, I detail some thoughts on why it is a perfect preamble for most ipython notebooks.
A feature of MotionClouds is the ability to precisely tune the precision of information following the principal axes. One which is particularly relevant for the primary visual cortical area of primates (area V1) is to tune the orientation mean and bandwidth.
This is part of a larger study to tune orientation bandwidth.
I needed to show prior information about the orientation of contours in natural images, which shows a preference for the cardinal axes. A polar plot seemed to be a natural choice for displaying the probability distribution function. However, this choice seems visually flawed...
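To make the issue concrete, here is a minimal sketch of such a polar plot (the distribution below is a cardinal-bias placeholder, not the measured natural image statistics):
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2*np.pi, 360, endpoint=False)
pdf = 1. + .5 * np.cos(2 * theta)        # placeholder cardinal bias
pdf /= np.trapz(pdf, theta)              # normalize to a probability density
ax = plt.subplot(111, projection='polar')
ax.plot(theta, pdf)
plt.show()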
It's easy to record tracks (for instance while running) using the "My tracks" app on android systems. What about being able to re-use them?
In :doc:2014-11-22-reading-kml-my-tracks-files-in-ipython
we reviewed different approaches. Let's now try to use this data.
It's easy to record tracks (for instance while running) using the "My tracks" app on android systems. What about being able to re-use them?
Here, I am reviewing some methods to read a KML file, including fastkml, pykml, to finally opt for a custom method with the xmltodict package.
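The gist of the xmltodict route fits in a few lines (a sketch; the file name and the exact nesting of the KML exported by "My tracks" are assumptions):
import xmltodict

with open('track.kml') as f:
    doc = xmltodict.parse(f.read())
# drill down to the coordinates string (the key path below is an assumption)
coords = doc['kml']['Document']['Placemark']['LineString']['coordinates']
# each whitespace-separated entry is a "lon,lat,alt" triplet
points = [tuple(map(float, c.split(','))) for c in coords.split()]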
We test Reverse-phi motion and the Asymmetry of ON and OFF responses using MotionClouds.
A feature of MotionClouds is the ability to precisely tune the precision of information following the principal axes. One which is particularly relevant for the primary visual cortical area of primates (area V1) is to tune the orientation mean and bandwidth.
A feature of MotionClouds is the ability to precisely tune the precision of information following the principal axes. One which is particularly relevant for the primary visual cortical area of primates (area V1) is to tune the orientation mean and bandwidth.
To install the necessary libraries, check out the documentation.
For the biphoton experiment:
Motion Clouds were originally defined to provide a simple parameterization for textures. Thus we used a simple unimodal, normal distribution (on the log-radial frequency space, to be more precise). But the larger set of Random Phase Textures may provide some interesting examples, some of them can even be fun! This is the case of this simulation of the waves you may observe on the surface of the ocean.
from IPython.display import HTML
HTML('<center><video controls autoplay loop src="../files/2014-10-24_waves/waves.mp4" width=61.8%/></center>')
The main features of such (surface) gravity waves are:
More info about deep water waves : http://farside.ph.utexas.edu/teaching/336L/Fluidhtml/node122.html
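The key property used here is the deep-water dispersion relation described at that link, $\omega^2 = g k$: phase speed decreases with spatial frequency, so long waves travel faster than short ones. A short numerical sketch:
import numpy as np

g = 9.81                          # gravity, m/s^2
k = np.linspace(.1, 10., 100)     # wavenumber, rad/m
omega = np.sqrt(g * k)            # temporal frequency, rad/s
phase_speed = omega / k           # = sqrt(g/k): long waves travel faster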
In this notebook, we are interested in replicating the results from Dong and Atick (1995).
TODO!
import numpy as np
import MotionClouds as mc
downscale = 2
fx, fy, ft = mc.get_grids(mc.N_X//downscale, mc.N_Y//downscale, mc.N_frame//downscale)
name = 'MotionPlaid'
mc.figpath = '../files/2014-10-20_MotionPlaids'
import os
if not(os.path.isdir(mc.figpath)): os.mkdir(mc.figpath)
Plaids are usually created by adding two moving gratings (the components) to form a new stimulus (the pattern). Such stimuli are crucial to understand, for instance, how information about motion is represented and processed, and their neural mechanisms have been extensively studied, notably by Anthony Movshon at NYU. One shortcoming is the fact that these stimuli are built from components which create interference patterns when they are added (as in a Moiré pattern). The question remains open whether everything that we know about component vs pattern processing comes from these interference patterns or really from the processing of the two components as a whole. As such, Motion Clouds are ideal candidates because they are generated with random phases: by construction, there should not be any localized interference between components. Let's verify that:
Definition of parameters:
theta1, theta2, B_theta = np.pi/4., -np.pi/4., np.pi/32
This figure shows how one can create MotionCloud stimuli that specifically target component and pattern cells. The rows of this table show respectively: Top) one motion cloud component (with a strong selectivity toward the orientation perpendicular to direction) heading in the upper diagonal; Middle) a similar motion cloud component following the lower diagonal; Bottom) the addition of both components: perceptually, the horizontal direction is predominant.
print( mc.in_show_video.__doc__)
Component one:
diag1 = mc.envelope_gabor(fx, fy, ft, theta=theta1, V_X=np.cos(theta1), V_Y=np.sin(theta1), B_theta=B_theta)
name_ = name + '_comp1'
mc.figures(diag1, name_, seed=12234565, figpath=mc.figpath)
mc.in_show_video(name_, figpath=mc.figpath)
Component two:
diag2 = mc.envelope_gabor(fx, fy, ft, theta=theta2, V_X=np.cos(theta2), V_Y=np.sin(theta2), B_theta=B_theta)
name_ = name + '_comp2'
mc.figures(diag2, name_, seed=12234565, figpath=mc.figpath)
mc.in_show_video(name_, figpath=mc.figpath)
The pattern is the sum of the two components:
name_ = name
mc.figures(diag1 + diag2, name, seed=12234565, figpath=mc.figpath)
mc.in_show_video(name_, figpath=mc.figpath)
Script done in collaboration with Jean Spezia.
import os
import numpy as np
import MotionClouds as mc
name = 'MotionPlaid_9x9'
if mc.check_if_anim_exist(name):
    B_theta = np.pi/32
    N_orient = 9
    downscale = 4
    fx, fy, ft = mc.get_grids(mc.N_X//downscale, mc.N_Y//downscale, mc.N_frame//downscale)
    mov = np.zeros(((N_orient*mc.N_X//downscale), (N_orient*mc.N_X//downscale), mc.N_frame//downscale))
    i = 0
    j = 0
    for theta1 in np.linspace(0, np.pi/2, N_orient)[::-1]:
        for theta2 in np.linspace(0, np.pi/2, N_orient):
            diag1 = mc.envelope_gabor(fx, fy, ft, theta=theta1, V_X=np.cos(theta1), V_Y=np.sin(theta1), B_theta=B_theta)
            diag2 = mc.envelope_gabor(fx, fy, ft, theta=theta2, V_X=np.cos(theta2), V_Y=np.sin(theta2), B_theta=B_theta)
            mov[i*mc.N_X//downscale:(i+1)*mc.N_X//downscale, j*mc.N_Y//downscale:(j+1)*mc.N_Y//downscale, :] = mc.random_cloud(diag1 + diag2, seed=1234)
            j += 1
        j = 0
        i += 1
    mc.anim_save(mc.rectif(mov, contrast=.99), os.path.join(mc.figpath, name))
mc.in_show_video(name, figpath=mc.figpath)
As in (Rust, 06), we show in this table the concatenation of a 9x9 table of MotionPlaids where the angles of the components vary along the horizontal and vertical axes respectively.
The diagonal from the bottom-left to the top-right corner shows the addition of two component MotionClouds with similar direction: they are therefore also instances of the same Motion Clouds and thus consist of a single component.
As one gets further away from this diagonal, the angle between both components increases, as can be seen in the figure below. Note that the first and last columns are different instances of similar MotionClouds, just as the first and last rows of the table.
N_orient = 8
downscale = 2
fx, fy, ft = mc.get_grids(mc.N_X//downscale, mc.N_Y//downscale, mc.N_frame)
theta = 0
for dtheta in np.linspace(0, np.pi/2, N_orient):
    name_ = name + 'dtheta_' + str(dtheta).replace('.', '_')
    diag1 = mc.envelope_gabor(fx, fy, ft, theta=theta + dtheta, V_X=np.cos(theta + dtheta), V_Y=np.sin(theta + dtheta), B_theta=B_theta)
    diag2 = mc.envelope_gabor(fx, fy, ft, theta=theta - dtheta, V_X=np.cos(theta - dtheta), V_Y=np.sin(theta - dtheta), B_theta=B_theta)
    mc.figures(diag1 + diag2, name_, seed=12234565, figpath=mc.figpath)
    mc.in_show_video(name_, figpath=mc.figpath)
For clarity, we display MotionPlaids as the angle between both components increases from 0 to pi/2.
Left column displays iso-surfaces of the spectral envelope by displaying enclosing volumes at 5 different energy values with respect to the peak amplitude of the Fourier spectrum.
Right column of the table displays the actual movie as an animation.
An easy way to include a movie in a notebook using HoloViews.
When studying a multi-dimensional random variable whose components are Gaussian, the norm of that vector follows a $\chi$ distribution (see http://en.m.wikipedia.org/wiki/Chi_distribution).
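A quick numerical check (dimension and sample size below are arbitrary): the norm of $k$ independent standard Gaussians should match the mean of a $\chi$ distribution with $k$ degrees of freedom.
import numpy as np
from scipy import stats

k, n = 3, 100000
samples = np.linalg.norm(np.random.randn(n, k), axis=1)
print(samples.mean(), stats.chi.mean(k))   # the two values should be close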
Trying to answer the question in http://stackoverflow.com/questions/12233105/how-can-i-display-an-image-in-the-terminal/22537549?noredirect=1#comment41404352_22537549 :
Is there any sort of utility I can use to convert an image to ASCII and then print it in my terminal? I looked for one but couldn't seem to find any.
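A minimal sketch of an answer (assuming Pillow is installed; the character ramp, file name and output size are arbitrary choices):
from PIL import Image

chars = ' .:-=+*#%@'                        # from dark to bright
W, H = 80, 40
img = Image.open('image.png').convert('L').resize((W, H))
pixels = list(img.getdata())
for i in range(H):
    row = pixels[i * W:(i + 1) * W]
    # map each gray level to a character of the ramp and print one text row per image row
    print(''.join(chars[p * (len(chars) - 1) // 255] for p in row))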
Following http://nbviewer.ipython.org/github/ipython/ipython/blob/1.x/examples/notebooks/Part%205%20-%20Rich%20Display%20System.ipynb and http://nbviewer.ipython.org/gist/fonnesbeck/ad091b81bffda28fd657 , let's try to integrate an interactive D3 plot within a notebook.
A long standing dependency of MotionClouds is MayaVi. While powerful, it is tedious to compile and may discourage new users. We are trying here to show some attempts to do the same with matplotlib or any other library.
I have recently been asked how to transfer lots of files from a backup server to a local disk. The context is that
In the previous notebook, we saw how to create them. We will now use:
http://matplotlib.org/api/animation_api.html
http://jakevdp.github.io/blog/2012/08/18/matplotlib-animation-tutorial/
... to create animations of these slats.
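A minimal FuncAnimation sketch following the two links above (here a sine curve stands in for the slats):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation

fig, ax = plt.subplots()
x = np.linspace(0, 2*np.pi, 200)
line, = ax.plot(x, np.sin(x))

def update(frame):
    # shift the curve a little at each frame
    line.set_ydata(np.sin(x + 2*np.pi*frame/50.))
    return line,

anim = animation.FuncAnimation(fig, update, frames=50, interval=40, blit=True)
anim.save('slats.mp4')   # requires ffmpeg; in a notebook, HTML(anim.to_html5_video()) also works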
We will simulate a ring model whose input is a curve representing "natural" stimuli.
The density is defined on $[0, 2\pi[$ by
$f(\theta_d) = \frac{e^{\kappa \cos(\theta_d - m)}}{2 \pi I_{0}(\kappa)}$
with $\kappa = \frac{1}{\sigma^{2}}$.
Orientation is defined on $[0, \pi[$ by replacing the variable $(\theta_d - m)$ with $2(\theta_o - m)$;
we then obtain the density for $\theta_o$: $f(\theta_o) = \frac{1}{2 \pi I_{0}(\kappa)} \cdot e^{\frac{\cos(2(\theta_o - m))}{\sigma^{2}}}$ and $p = \frac{1}{2\pi \sigma_1 \sigma_2} \cdot e^{- 2 \frac{(m_2-m_1)^{2}}{\sigma_{1}^{2} + \sigma_{2}^{2}}}$,
since, with this change of variable: $\varepsilon(x) = \frac{1}{\sigma_{1} \sqrt{2\pi}} \cdot e^{\frac{-(2(x-m_{1}))^{2}}{2\sigma_{1}^{2}}}$ and $\gamma(x) = \frac{1}{\sigma_2 \sqrt{2\pi}} \cdot e^{\frac{-(2(x-m_2))^{2}}{2\sigma_{2}^{2}}}$,
so $\varepsilon(x) \cdot \gamma(x) = \frac{1}{2\pi \sigma_{1} \sigma_{2}} \cdot e^{-2\left(\frac{(x-m_{1})^{2}}{\sigma_{1}^{2}} + \frac{(x-m_{2})^{2}}{\sigma_{2}^{2}}\right)} = \frac{1}{2\pi \sigma_1 \sigma_2} \cdot e^{- 2 \frac{(m_2-m_1)^{2}}{\sigma_{1}^{2} + \sigma_{2}^{2}}}$.
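A small numerical sketch of the orientation density above (function and variable names are mine, not the notebook's):
import numpy as np
from scipy.special import i0   # modified Bessel function of order 0

def density_orientation(theta, m=0., sigma=.5):
    kappa = 1. / sigma**2
    # von Mises density wrapped on [0, pi[ by doubling the angle
    return np.exp(kappa * np.cos(2 * (theta - m))) / (2 * np.pi * i0(kappa))

theta = np.linspace(0, np.pi, 180, endpoint=False)
p = density_orientation(theta)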
Work on Motion Clouds and observation of the effect of changing different parameters, in particular B_sf.
http://neuralensemble.github.io/MotionClouds/
Simulation of a ring model using the Brian simulator, and observation of the consequences of changing different parameters.
M1 internship of Chloé Pasturel, supervised by Laurent Perrinet (INT, CNRS)
(imported from https://laurentperrinet.github.io/grant/anr-bala-v1/ )
wget -O- http://neuro.debian.net/lists/trusty.de-md.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver pgp.mit.edu 2649A5A9
sudo apt-get update
sudo apt-get install aptitude
sudo aptitude install ipython-notebook
sudo aptitude install python-matplotlib
sudo aptitude install python-brian
sudo aptitude install texlive-full
sudo aptitude install python-pynn
pip install --user neurotools
In a previous post, I have shown how to convert MoinMoin pages to this blog (Nikola engine). Let's now tidy up the place and remove obsolete pages.
Pages that were successfully converted:
From a previous post, we have this function to import a page from MoinMoin, convert it and publish it:
I have another wiki to take notes. These are written using the MoinMoin syntax, which is nice, but I found no converter from MoinMoin to anything compatible with Nikola.
Following this post http://carreau.github.io/posts/06-NBconvert-Doc-Draft.html , I thought I might give it a try using ipython within nikola:
URL = 'https://URL/cgi-bin/index.cgi/SciBlog/2005-10-29?action=print'
NOTE: this should not be tried with these URLS as they do not exist anymore...
Here, we will need to install additions to the nikola publishing tool: a rendering machine + a theme. Using Homebrew on macOS, this looks like:
brew install npm
npm install -g less
and
nikola install_theme zen-ipython
Once this was done, one could tune conf.py
and then issue:
nikola new_post -f ipynb
Finally, one needs to build (nikola build) and publish (nikola deploy).
plot(rand(50))