setting up e-mail at the INT

Warning

This post is certainly obsolete...

  • set the following parameters:

    1. IMAP server: imap.univmed.fr

    2. SSL (port 993)

    3. as the login, your univmed identifier (of the form toto.l)

    4. SMTP server: smtp.univmed.fr

INCOMING SERVER (IMAP)

Select IMAP and name it UNIV-AMU.

Enter your e-mail address, of the form firstname.lastname@univ-amu.fr.

Your login looks like: lastname.x or lastname (where x is the first letter of your first name).

Enter your display name: Firstname LASTNAME.

The IMAP server address is: imap.univmed.fr

The port is 993, with SSL encryption and password authentication.

OUTGOING SERVER (SMTP)

Select SMTP and name it UNIV-AMU.

The SMTP server address is: smtp.univmed.fr

Your login looks like: lastname.x or lastname

The port is 465, with SSL encryption and password authentication.
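
  • A minimal sketch to check these settings from a Python prompt (the login and password below are placeholders, and the only assumption is that the servers above are reachable from your network):

    import imaplib, smtplib

    login, password = 'toto.l', 's3cr3t'  # placeholders: use your univmed identifier

    # incoming mail: IMAP over SSL on port 993
    imap = imaplib.IMAP4_SSL('imap.univmed.fr', 993)
    imap.login(login, password)
    print(imap.select('INBOX'))           # should answer ('OK', [...])
    imap.logout()

    # outgoing mail: SMTP over SSL on port 465
    smtp = smtplib.SMTP_SSL('smtp.univmed.fr', 465)
    smtp.login(login, password)
    smtp.quit()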

Tropique: intervention at Enghien

neurosciences?

My scientific project focuses on the computational mechanisms that underlie cognition. That is, we know where these mechanisms take place, by describing the central nervous system as a network of neurons connected by synapses, and we know that they are supported by electro-chemical signals travelling between these nodes; but we do not yet fully know how the information that these signals seem to carry is interpreted. This decoding, which is at the heart of our work in neuroscience, has a "Grail": the discovery of a hypothetical "neural code", that is, of the language used in our brain. We do not know whether such a discovery is possible; the question arises: can there be a global knowledge of the brain in the manner of other scientific disciplines (for example, the trajectory of a planet with Newton's laws)? It is clear that the brain of a single human is not complex enough to circumscribe this complexity; it is only by networking brains, with all the neuro-scientific and artistic communities, that we will be able in the future to better understand this object...

We are still in the Middle Ages of a global understanding of cognition. There is no elementary building block or universal principle as there can be in other disciplines such as mechanics, chemistry or classical mathematical logic. We are here in the domain of the sciences of complexity: the concepts at play, such as the use of information measures, self-organization or emergence, are still very young compared to the age of humanity.

some research axes

There are many avenues for discovering it progressively, and I am particularly interested in the following axes:

  • The discovery of neural algorithms makes it possible to build new computing paradigms. Indeed, one thing we do know about the brain is that it is not a computer! At least, it is not a classical (von Neumann) computer in which all the information flows at very high speed through one (or a small number of) processor(s). Instead, the architecture of the nervous system is massively parallel, asynchronous and adaptive. These new algorithms could be implemented on the new generation of chips that are currently under development.

  • A better knowledge of these mechanisms naturally opens the way to many therapeutic applications, over a wide spectrum ranging from the control of epilepsies to the understanding of neural degeneration. In the laboratory we apply this scientific approach by concentrating on the foundations of vision, and in particular on the ability to detect motion.

  • As can be seen, our scientific approach is relatively broad, and while it is applied to a particular case (motion detection), we make sure that it can always be carried over generically to other problems: other sensory or cognitive modalities, but above all other scales of analysis, from the very small (the interaction between sub-parts of a neuron) to the very large (social interactions).

So much for a brief presentation.

Tropique

There is thus a great proximity between this field of research and the artistic approach of Etienne Rey, a proximity which led to the emergence of this project. At first I was surprised by his use of keywords (diffraction, particle, resonance, emergence, ...) and thought they were used mostly for the poetic power of their evocation. In fact, over the course of our discussions we realized that we were speaking the same language, and that a path opens up if we confront our perspectives by redefining what is not yet precisely defined. This is the interest of Tropique for me as a researcher in neuroscience: a space of creation in the implementation of the project, in the definition of the "artificial brain" that will control it, and an unpredictable space of creation that will be born from the interaction with the audience.

  • the phase of handling the motion information coming from several actors is a technological feat that will be a trial by fire for the neuro-mimetic algorithms we are developing. In particular, the concept of an elementary particle of motion information will be able to prove its usefulness at a practical level,

  • exploring in practice the resonance between Perception and Action. These two facets of cognition, which are engraved in the anatomy of the brain, are inseparable. Instinct Paradise offers an experimental space that lets us directly manipulate a person's perception of space (their "aura") as well as their interactions. In the manner of a fractal, we envisage transposing this level of inter-personal social interactions (10m x 10m) onto a model of neural interactions (1cm x 1cm) based on similar elementary rules of diffusion/aggregation; in particular, we plan to use the recorded data to interpret them and to see the differences between places, times and configurations,

  • it is a human adventure, a series of exchanges, a project that we want to share. At my level, it also matters for the recognition that comes from it being supported by institutions. At a time when the only public space for science is the mysticism of a pair of lipo-surgered twins or the industrious scepticism of a geologically mammothed ex-minister, it is a real pleasure to be able to set up a project that lets me present some advances in our knowledge of the brain. Finally, my interest is also in being able to share a beer at the bar of la Friche, chatting freely about metaphysical concepts, then diving into a very specific detail of the construction of a detector, or imagining the possible scenarios of interaction.

installing Dovecot on debian

Warning

This post is certainly obsolete...

  • Configure

    sudo vim /etc/dovecot/dovecot.conf
  • Mine reads (it's just meant to access imap files from the local mail server and not to serve outside the localhost):

    protocols = imaps
    listen = localhost:10943
    mail_location = maildir:~/Maildir
    protocol imap {
        ssl_listen = *:993
    }
    ssl_disable = no
    ssl_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
    ssl_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
  • Reload

    sudo service dovecot restart
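
  • a quick sanity check of the mailbox that dovecot serves (a minimal sketch, assuming that your mail is indeed delivered to ~/Maildir as set by mail_location above):

    import os, mailbox

    # open the Maildir configured in mail_location and count the messages
    mdir = mailbox.Maildir(os.path.expanduser('~/Maildir'), factory=None)
    print('%d messages in the Maildir' % len(mdir))
    for msg in mdir:
        print(msg['subject'])  # peek at the first subject line
        break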

Compiling pyglet on MacOS X

  • you may get errors when trying to install pyglet the traditional way, for instance using pip (this was my case on MacOS X Lion 10.7.0 with a 64-bit python from EPD or homebrew). The cause is the Carbon code, which has been abandoned in the 64-bit libraries that come with the OS; see the check below.
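
  • a quick way to check whether your interpreter is one of these 64-bit builds (a generic sketch, not specific to pyglet):

    import sys, platform

    print(platform.platform())
    # the Carbon bindings only work from a 32-bit interpreter
    print('64-bit python' if sys.maxsize > 2**32 else '32-bit python')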

Warning

This post is certainly obsolete...

computational and theoretical neuroscience

the poll

  • Dear list
    
    A recent paper in PLoS Computational Biology
    
    > The Roots of Bioinformatics in Theoretical Biology
    > Paulien Hogeweg
    > Volume 7(3) March 2011 http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1002021
    
    makes a point in the evolution of the meaning of the field of bioinformatics with the advent of data-driven modeling.
    
    The same seems to have happened in computational neuroscience. The sense slowly drifted from the original papers (such as Science, Vol. 241, No. 4871, 1988, pp. 1299-1306, by T. J. Sejnowski, C. Koch, P. S. Churchland), which I believe is perfectly captured in the sentence: "The ultimate aim of computational neuroscience is to explain how electrical and chemical signals are used in the brain to represent and process information." (this does not exclude using computers, of course).
    
    It seems to be merely a semantic problem, but this may generate some confusion (realpolitik translation: "and this may hinder the efficiency of your grant proposal"). Recently an (anonymous) colleague told me they called their group "computational AND theoretical neuroscience" (as if these two fields were separate) out of the lack of consensus on the meaning of the words and in order not to exclude anyone. Nowadays, even in the university, there is a continuum of fields combining biology, mathematics or computer science, and all computational neuroscientists reflect this as individuals. So what's the situation in 2011?
    
    I have often asked fellow colleagues this question, "what is computational neuroscience?", and often got one of these answers (I try to be unbiased - please correct me):
    
     [ ] it is a field of neuroscience involving the use of computers (von Neumann machines, Dell boxes, macbooks, ...) to simulate and analyze data obtained from experimental neuroscience and advance our knowledge from this dialogue. Theoretical neuroscience is different in the sense that it proposes mathematical models of how it works.
    
     [ ] it is the field of neuroscience studying how information is represented and processed in neural activity. This involves a dialogue with experimental neuroscience to analyze and propose experiments. It proposes models, that is theories for the relation between function and structure. Theoretical neuroscience is a subset of computational neuroscience that tries to express these models in standard mathematical language.
    
     [ ] it is some field of neuroscience and why would you care to give an exact definition? its frontiers are moving and it has many facets, theoretical neuroscience being just one example. it cares about being less ignorant on relation between function and structure in neuroscience.
    
    If you want to express your opinion, you can do so in one click:
    https://spreadsheets.google.com/viewform?formkey=dDc5X2dJRS1zMHRiSndSNERWelBkQlE6MQ
    results :
    https://spreadsheets.google.com/lv?key=0AueMPskll6yrdDc5X2dJRS1zMHRiSndSNERWelBkQlE&hl=fr&f=0&rm=full#gid=0
    
    cheers,
    Laurent

results

  • I have received 36 responses with the following results (see https://spreadsheets.google.com/lv?key=0AueMPskll6yrdDc5X2dJRS1zMHRiSndSNERWelBkQlE&hl=fr&f=0&rm=full#gid=0 ):

    1. [ 5 ] it is a field of neuroscience involving the use of computers (von Neumann machines, Dell boxes, macbooks, ...) to simulate and analyze data obtained from experimental neuroscience and advance our knowledge from this dialogue. Theoretical neuroscience is different in the sense that it proposes mathematical models of how it works.

    2. [ 21 ] it is the field of neuroscience studying how information is represented and processed in neural activity. This involves a dialogue with experimental neuroscience to analyze and propose experiments. It proposes models, that is theories for the relation between function and structure. Theoretical neuroscience is a subset of computational neuroscience that tries to express these models in standard mathematical language.

      • one answer is amended by "is a sub-field of theoretical neuroscience that seeks to understand how information is represented and processed in the nervous system by implementing and testing theories in the form of computer simulations."

      • another answer comments "Not a definition, but a comment. Since Comp. Neurosci. (or whatever it should be called -- some people now call it Neurodynamics) is already such a small field and barely represented on the map, I think it is foolish to further subdivide it. That is why I like the first definition a bit more, such that both are collected under one roof."

    3. [ 6 ] it is some field of neuroscience and why would you care to give an exact definition? its frontiers are moving and it has many facets, theoretical neuroscience being just one example. it cares about being less ignorant on relation between function and structure in neuroscience.

    4. [ 4 ] other free-form answers were given:

      1. with a slight modification to the first definition

        I agree with definition (1) except the "Theoretical neuroscience is a subset of ..." as I might argue that "Computational neuroscience is a subset of theoretical neuroscience".
      2. with a new definition close to the aims of theoretical neuroscience

        it is field of neuroscience that use mathematical models to analyze the data obtained from experimental neuroscience. Therefore, it gives a logical result to it and it can be explained instead to be as a magic box
      3. to a strict "computational" view

        It is the subset of theoretical neuroscience that hypothesises that the brain is a computer. This relates to the first definition to the extent that 'computation' is identified with 'information processing'. Theoretical neuroscience is simply the development of models (in any form, including mathematics or computer simulations) of neural processes. It is possible for a process to be simulated or analyzed using a computer - see the second definition - without claiming the process itself is an example of computation. 'Computational neuroscience' usually implies this stronger claim, though it is now often used more loosely (definition three).
      4. or to the interesting view that these different approaches overlap but correspond to different approaches:

        In my view, theoretical neuroscience is the non-experimenting version of neuroscience, much like theoretical physics is the non-experimenting version of physics.
        
        I would argue that theoretical neuroscience and computational neuroscience are different in their approaches.
        
        Computational neuroscience has a strong focus on simulation. It is the "virtual" extension of electrophysiology. The modeling philosophies of GENESIS and Neuron clearly reflect this. So called "biologically realistic" simulations are the gold standard in computational neuroscience.
        
        Theoretical neuroscience, by contrast, has its focus on mathematical descriptions and properties of nervous structures. Theoretical neuroscience starts, when the experiments, real or simulated, are done. The excellent books of Henry Tuckwell illustrate this. Here, simulation is not the method of choice, but the last resort after all pencils are broken and all paper is used up ;-)
        
        At best, I would say that there is overlap between the two rather than that one comprises the other.
        
        And while we are at definitions. Why not add the re-incarnated term of "Neuroinformatics" to the contest?

analysis

mercurial & LaTeX

Warning

This post is certainly obsolete...

  1. Just add the following lines to your Makefile

    HGID:=$(shell hg parents -R .. --template "Mercurial revision {rev} - date: {date|isodate}")
    hgid.tex:dummy
            [ -f $@ ] || touch $@
            echo '\\renewcommand{\hgid}{$(HGID)}' > $@
    dummy: ;
  2. and these lines to your main tex file

    \newcommand{\hgid}{null}
    \input{hgid}

    now one can use the command \hgid to get the version everywhere.

  3. for instance

    \newcommand{\HRule}{\rule{\linewidth}{0.5mm}}
    \usepackage{fancyhdr}
    \pagestyle{fancyplain}
    \fancyhead{}
    \chead{{\sc This a DRAFT, please do not distribute.}}
    \cfoot{\HRule \\ \hgid}
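
  4. alternatively, the same version stamp can be written by a small Python script (a sketch, assuming hg is on your PATH and that, as in the Makefile above, the repository root is the parent directory):

    import subprocess

    # ask mercurial for the same template string as the Makefile rule above
    hgid = subprocess.check_output(
        ['hg', 'parents', '-R', '..',
         '--template', 'Mercurial revision {rev} - date: {date|isodate}'])
    # (re)write hgid.tex so that \hgid expands to the current revision
    with open('hgid.tex', 'w') as f:
        f.write('\\renewcommand{\\hgid}{%s}\n' % hgid.decode('utf-8').strip())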

using a versioning system

Warning

This post is certainly obsolete...

using SubVersion (SVN)

Version Control is an everyday tool to handle your important source files. It is useful:

  • to grab the latest source code from open source projects and to always keep up-to-date,

  • to share a bunch of files (a set of latex source code, python scripts, ...), allowing you to work on different computers and with different people,

  • to keep track of revisions from your project.

SVN: Getting help

SVN: 2 minutes guide using the commandline

  • to just access a remote repository and make a local working copy ("checking-out") in a local my_projects folder, do

    cd my_projects/
    svn checkout svn+ssh://myname@svnserver/path/to/svn/project my_localcopy

    this basically copies the current version of the remote repository to your local computer as the folder my_localcopy. Then, you just need to issue svn up in this folder to stay up-to-date.

  • to create your own folder from scratch in an existing repository, do

    cd my_projects/
    mkdir new_project
    svn add new_project
    svn ci new_project -m'Committing my modifications'

    Then you just need to issue the svn ci new_project -m'Committing my modifications' command to commit new modifications to the server. Don't forget to svn add newfiles or to svn rm obsolete_files.

random SVN tips

  • roll-back to a previous version (e.g. 3421) of a file myfile:

    svn up -r 3421 myfile
  • Create a repository

    • use the database backend

      svnadmin create /ih/funk/svn/projects
    • use the filesystem backend

      svnadmin create --fs-type=fsfs PATH
  • Import a revision

    svn import -m "Initial import" Eccos file:///ih/funk/svn/projects
  • Check out a revision

    svn co file:///ih/funk/svn/projects
  • Dump a repository

    svnadmin dump /ih/funk/svn/projects | gzip -9 > dump.gz
    svnadmin dump /ih/funk/svn/projects | gzip -9 > `date "+Eccosdump%Y-%m-%d_%H:%M:%S.gz"`
    svnadmin dump /ih/funk/svn/projects | gzip -9 > `date "+projects_dump%Y-%m-%d_%H:%M:%S.gz"`
  • Load contents of a dump into a repository

    gunzip -c dump.gz | svnadmin load /data/svn/projects
  • Import from an existing directory, no need to check it out again

    • It should work, but you could also check it out right into /etc. Something like this:

      $ svnadmin create /var/svnrepos/admin
      $ svn mkdir -m "initial setup" file:///var/svnrepos/admin/trunk
      c:> svn mkdir -m "initial setup" file:///c:/fhs/svn_repos/trunk
      $ cd /etc
      $ svn co file:///var/svnrepos/admin/trunk .
      $ svn add passwd group
      $ svn commit -m "start loading it in"
      
      I tested the 'svn co' into '.' just now. Works great.
  • svn propset

    svn propset svn:keywords "LastChangedDate LastChangedRevision Id Author" weather.txt
    svn propset svn:keywords "LastChangedDate LastChangedRevision Id" slides.tex
  • Before an update you could use the following to get the log messages of the changes:

    svn log -rBASE:HEAD
  • Upgrade to a new subversion version

    $ mv repos repos.tmp
    $ svnadmin create repos
    $ svnadmin-old dump repos.tmp | svnadmin load repos
    $ # copy over any hook scripts and stuff from repos.tmp to repos
  • Checkout from a repository over ssh

    svn co svn+ssh://felix/home/reichr/svn_repos/XSteveData/trunk data
  • Change the path of the repository for a working copy

    svn switch --relocate file:///original/path/to/repos file:///new/path

    WARNING: this will not work if file:///original/path/to/repos is not exactly the original URL. Be sure to check before with svn info.

  • Network a repository via svn+ssh:

    • create the repository on the repository host:

      svnadmin create rp1  -- this is located at /home/svtest/rp1
    • Import data to the repository:

      svn import -m"Initial import" svn+ssh://svtest@host/rp1/trunk
    • Checkout the project:

      svn co svn+ssh://svtest@host/home/svtest/rp1/trunk p1
  • Generate a patch to undo some local changes and redo them later: What usually happens to me is that I've changed N files in M different directories distributed all over the filesystem, and I want to check in N-1 of them. If I need to commit all but one file, I do this:

    % svn diff path/to/file_not_committing > /tmp/patch.txt
    % svn revert path/to/file_not_committing
    % svn ci -m "committing all the stuff i wanted to"
    % patch -p0 < /tmp/patch.txt

    Revert is your friend. Learn it, use it, looooooooooove it.

  • Revert to a previous version

    svn co project
    <edit foo.c, adding bugs>
    svn ci foo.c (commits to r348)
    <realize terrible error>
    svn merge -r348:347 foo.c
    svn ci foo.c (commits 349)

    note the ordering of the revision numbers in the merge command. what this really says is "make a diff between revision 348 and 347, and apply it immediately to foo.c" if you are trying to revert a directory tree with moves or deletes in it, and are getting arcane errors, try the --ignore-ancestry flag.

  • Edit the commit/log messages after the commit Read chapter 7, regarding unversioned properties attached to revisions. You want to change the svn:log property:

    $ svn propedit -r N --revprop svn:log URL

an alternative: Git

    Git command        SVN equivalent
    git clone url      svn checkout url
    git pull           svn up
    git commit         svn commit
    git push url       (no such thing)

  • to set up

    git config --global user.name "Your Name Comes Here"
    git config --global user.email you@yourdomain.example.com
    git config --global color.diff auto
    git config --global color.status auto
    git config --global color.branch auto

using Git with SVN

  • install git-svn and use

    git svn fetch

scripting MoinMoin to get, change or rename pages

  • MoinMoin is hugely useful for day-to-day use. Scripting it is even better. Here, I show how to get, edit and rename pages on your wiki. To avoid bad surprises, this is done on a copy of the remote server, served by a local server with a wikiconfig.py script.

  • it heavily uses examples shown in http://moinmo.in/MoinAPI/Examples?highlight=%28xmlrpc%29

  • first define the server and import the library

    import xmlrpclib

    wikiurl = "http://localhost:8080"
    username, password = 'YourName', 'yur)s3cr3t-pwd'
  • let's try to read a page

    pagename = u'NewsEvents' # not protected
    pagename = u'Publications/Perrinet06ciotat' # protected
    homewiki = xmlrpclib.ServerProxy(wikiurl + "?action=xmlrpc2", allow_none=True)
    auth_token = homewiki.getAuthToken(username, password)
    mc = xmlrpclib.MultiCall(homewiki)
    mc.applyAuthToken(auth_token)
    mc.getPage(pagename)
    result = mc()
    success, raw = tuple(result)  # auth status, then the raw page content
    if success == "SUCCESS":
        print "reading page '%s' : %s" % (pagename, success)
    else:
        print success
  • and now to write another one

    pagename = u'TestingPage'
    text = """
    This is a line of TEXT

    This is another line of text

    """
    homewiki = xmlrpclib.ServerProxy(wikiurl + "?action=xmlrpc2", allow_none=True)
    auth_token = homewiki.getAuthToken(username, password)
    mc = xmlrpclib.MultiCall(homewiki)
    mc.applyAuthToken(auth_token)
    mc.putPage(pagename, text)
    result = mc()
    success = tuple(result)[0]
    if success == "SUCCESS":
        print "page '%s' created: %s" % (pagename, success)
    else:
        print 'You did not change the page content, not saved!'
  • so we may now read a page, replace some text and write it

    old, new = 'Category', 'Tag'

    homewiki = xmlrpclib.ServerProxy(wikiurl + "?action=xmlrpc2", allow_none=True)
    auth_token = homewiki.getAuthToken(username, password)
    mc = xmlrpclib.MultiCall(homewiki)
    mc.applyAuthToken(auth_token)
    mc.getPage(pagename)
    result = mc()
    if tuple(result)[0] == "SUCCESS":
        print "page '%s' to modify: %s" % (pagename, tuple(result)[0])
        raw = tuple(result)[1]
        if raw.find(old) > -1:
            raw = raw.replace(old, new)
            # print raw
            mc.putPage(pagename, raw)
            result = mc()
            print result[0]
        else:
            print 'not modified'
    else:
        print tuple(result)[0]
  • let's now do that on the whole website

    old, new = '^= reference =$', '^== reference ==$'
    homewiki = xmlrpclib.ServerProxy(wikiurl + "?action=xmlrpc2", allow_none=True)
    auth_token = homewiki.getAuthToken(username, password)
    mc = xmlrpclib.MultiCall(homewiki)
    mc.applyAuthToken(auth_token)
    mc.getAllPages() # opts={'include_system':False, 'include_underlay':False}
    result = mc()
    pagelist = tuple(result)[1]
    for pagename in pagelist:
        homewiki = xmlrpclib.ServerProxy(wikiurl + "?action=xmlrpc2", allow_none=True)
        auth_token = homewiki.getAuthToken(username, password)
        mc = xmlrpclib.MultiCall(homewiki)
        mc.applyAuthToken(auth_token)
        mc.getPage(pagename)
        try:
            result = mc()
            if tuple(result)[0] == "SUCCESS":
                raw = tuple(result)[1]
                if raw.find(old) > -1:
                    raw = raw.replace(old, new)
                    mc.applyAuthToken(auth_token)
                    mc.putPage(pagename, raw)
                    result = mc()
                    print ":-) page '%s' modified: %s" % (pagename, tuple(result)[0])
            else:
                print tuple(result)[0]
        except:
            print 'failed', pagename
  • let's now rename one page

    homewiki = xmlrpclib.ServerProxy(wikiurl + "?action=xmlrpc2", allow_none=True)
    auth_token = homewiki.getAuthToken(username, password)
    mc = xmlrpclib.MultiCall(homewiki)
    mc.applyAuthToken(auth_token)
    mc.renamePage(u'TestingPage', u'TestPage2')
    result = mc()
    print result[0]
  • and now some more pages (here to reflect changes in the links)

    homewiki = xmlrpclib.ServerProxy(wikiurl + "?action=xmlrpc2", allow_none=True)
    auth_token = homewiki.getAuthToken(username, password)
    mc = xmlrpclib.MultiCall(homewiki)
    old, new = 'Category', 'Tag'
    for pagename in homewiki.getAllPages():
        if pagename.find(old) > -1:
            mc = xmlrpclib.MultiCall(homewiki)
            mc.applyAuthToken(auth_token)
            mc.renamePage(pagename, pagename.replace(old, new))
            result = mc()
            print ":-) page '%s' modified: %s" % (pagename, tuple(result)[0])

Publications 2006-2010

articles

  1. Publications/Barthelemy07

  2. Publications/Cessac07

  3. Publications/Daucé10

  4. Publications/Davison08

  5. Publications/Fischer07

  6. Publications/Fischer07cv

  7. Publications/Kremkow10jcns

  8. Publications/Montagnini07

  9. Publications/Perrinet06

  10. Publications/Perrinet07neurocomp

  11. Publications/Perrinet10shl

  12. Publications/Voges10neurocomp

  13. SciBlog/2011-07-05

references of full articles

  • Laurent Perrinet. Dynamical Neural Networks: modeling low-level vision at short latencies, URL . pages 163--225.

  • Frédéric Barthélemy, Laurent Perrinet, Éric Castet, Guillaume S. Masson. Dynamics of distributed 1D and 2D motion representations for short-latency ocular following, URL URL2 URL3 . Vision Research, 48(4):501--22, 2008

  • Anna Montagnini, Pascal Mamassian, Laurent U. Perrinet, Eric Castet, Guillaume S. Masson. Bayesian modeling of dynamic motion integration, URL . Journal of Physiology (Paris), 101(1-3):64-77, 2007

    The quality of the representation of an object's motion is limited by the noise in the sensory input as well as by an intrinsic ambiguity due to the spatial limitation of the visual motion analyzers (aperture problem). Perceptual and oculomotor data demonstrate that motion processing of extended objects is initially dominated by the local 1D motion cues orthogonal to the object's edges, whereas 2D information takes progressively over and leads to the final correct representation of global motion. A Bayesian framework accounting for the sensory noise and general expectancies for object velocities has proven successful in explaining several experimental findings concerning early motion processing [1, 2, 3]. However, a complete functional model, encompassing the dynamical evolution of object motion perception is still lacking. Here we outline several experimental observations concerning human smooth pursuit of moving objects and more particularly the time course of its initiation phase. In addition, we propose a recursive extension of the Bayesian model, motivated and constrained by our oculomotor data, to describe the dynamical integration of 1D and 2D motion information.

  • Laurent Perrinet, Guillaume S. Masson. Modeling spatial integration in the ocular following response using a probabilistic framework, URL . Journal of Physiology (Paris), 2007

    The machinery behind the visual perception of motion and the subsequent sensori-motor transformation, such as in the Ocular Following Response (OFR), is confronted to uncertainties which are efficiently resolved in the primate's visual system. We may understand this response as an ideal observer in a probabilistic framework by using Bayesian theory (Weiss et al., 2002) which we previously proved to be successfully adapted to model the OFR for different levels of noise with full field gratings (Perrinet et al., 2005). More recent experiments of OFR have used disk gratings and bipartite stimuli which are optimized to study the dynamics of center-surround integration. We quantified two main characteristics of the spatial integration of motion: (i) a finite optimal stimulus size for driving OFR, surrounded by an antagonistic modulation and (ii) a direction selective suppressive effect of the surround on the contrast gain control of the central stimuli (Barthélemy et al., 2006). Herein, we extended the ideal observer model to simulate the spatial integration of the different local motion cues within a probabilistic representation. We present analytical results which show that the hypothesis of independence of local measures can describe the integration of the spatial motion signal. Within this framework, we successfully accounted for the contrast gain control mechanisms observed in the behavioral data for center-surround stimuli. However, another inhibitory mechanism had to be added to account for suppressive effects of the surround.

  • B. Cessac, E. Daucé, Laurent U. Perrinet, M. Samuelides. Topics in Dynamical Neural Networks: From Large Scale Neural Networks to Motor Control and Vision, URL: https://laurentperrinet.github.io/publication/cessac-07 URL2: http://www.springerlink.com/content/q00921n9886h/?p=03c19c7c204d4fa78b850f88b97da2f7π=0 . Springer Berlin / Heidelberg, 2007.

  • Andrew P Davison, Daniel Bruderle, Jochen Eppler, Jens Kremkow, Eilif Muller, Dejan Pecevski, Laurent Perrinet, Pierre Yger. PyNN: A Common Interface for Neuronal Network Simulators., URL . Frontiers in Neuroinformatics, 2:11, 2008

    Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN.

  • Sylvain Fischer, Rafael Redondo, Laurent Perrinet, Gabriel Cristóbal. Sparse approximation of images inspired from the functional architecture of the primary visual areas, URL . EURASIP Journal on Advances in Signal Processing, special issue on Image Perception, Article ID 90727, 16 pages, 2007

  • Sylvain Fischer, Filip Sroubek, Laurent U. Perrinet, Rafael Redondo, Gabriel Cristóbal. Self-invertible 2D log-Gabor wavelets, URL . Int. Journal of Computer Vision, 2007

  • Nicole Voges, Laurent Perrinet. Phase space analysis of networks based on biologically realistic parameters, URL . Journal of Physiology (Paris), 104(1-2):51--60, 2010

    We study cortical network dynamics for a more realistic network model. It represents, in terms of spatial scale, a large piece of cortex allowing for long-range connections, resulting in a rather sparse connectivity. We use two different types of conductance-based I&F neurons as excitatory and inhibitory units, as well as specific connection probabilities. In order to remain computationally tractable, we reduce neuron density, modelling part of the missing internal input via external poissonian spike trains. Compared to previous studies, we observe significant changes in the dynamical phase space: Altered activity patterns require another regularity measure than the coefficient of variation. We identify two types of mixed states, where different phases coexist in certain regions of the phase space. More notably, our boundary between high and low activity states depends predominantly on the relation between excitatory and inhibitory synaptic strength instead of the input rate. Key words: Artificial neural networks, Data analysis, Simulation, Spiking neurons. This work is supported by EC IP project FP6-015879 (FACETS).

  • Emmanuel Daucé, Laurent Perrinet. Computational Neuroscience, from Multiple Levels to Multi-level, URL . Journal of Physiology (Paris), 104(1--2):1--4, 2010

    Despite the long and fruitful history of neuroscience, a global, multi-level description of cardinal brain functions is still far from reach. Using analytical or numerical approaches, Computational Neuroscience aims at the emergence of such common principles by using concepts from Dynamical Systems and Information Theory. The aim of this Special Issue of the Journal of Physiology (Paris) is to reflect the latest advances in this field which has been presented during the NeuroComp08 conference that took place in October 2008 in Marseille (France). By highlighting a selection of works presented at the conference, we wish to illustrate the intrinsic diversity of this field of research but also the need of an unification effort that is becoming more and more necessary to understand the brain in its full complexity, from multiple levels of description to a multi-level understanding.

  • Laurent U. Perrinet. Role of homeostasis in learning sparse representations, URL . Neural Computation, 22(7):1812--36, 2010

    Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such an efficient coding is achieved using a competition across neurons so as to generate a sparse representation, that is, where a relatively small number of neurons are simultaneously active. Indeed, different models of sparse coding coupled with Hebbian learning and homeostasis have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism which optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, the homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair: By contributing to optimize statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.

references of articles and proceedings

  • Pierre Yger, Daniel Bruderle, Jochen Eppler, Jens Kremkow, Dejan Pecevski, Laurent Perrinet, Michael Schmuker, Eilif Muller, Andrew P Davison. NeuralEnsemble: Towards a meta-environment for network modeling and data analysis, URL . In Eighth Göttingen Meeting of the German Neuroscience Society, pages T26-4C. 2009

    NeuralEnsemble (http://neuralensemble.org) is a multilateral effort to coordinate and organise neuroscience software development efforts based around the Python programming language into a larger, meta-simulator software system. To this end, NeuralEnsemble hosts services for source code management and bug tracking (Subversion/Trac) for a number of open-source neuroscience tools, organizes an annual workshop devoted to collaborative software development in neuroscience, and manages a google-group discussion forum. Here, we present two NeuralEnsemble hosted projects: PyNN (http://neuralensemble.org/PyNN) is a package for simulator-independent specification of neuronal network models. You can write the code for a model once, using the PyNN API, and then run it without modification on any simulator that PyNN supports. Currently NEURON, NEST, PCSIM and a VLSI hardware implementation are fully supported. NeuroTools (http://neuralensemble.org/NeuroTools) is a set of tools to manage, store and analyse computational neuroscience simulations. It has been designed around PyNN, but can also be used for data from other simulation environments or even electrophysiological measurements. We will illustrate how the use of PyNN and NeuroTools ease the developmental process of models in computational neuroscience, enhancing collaboration between different groups and increasing the confidence in correctness of results. NeuralEnsemble efforts are supported by the European FACETS project (EU-IST-2005-15879)

  • Adrien Wohrer, Guillaume Masson, Laurent Perrinet, Pierre Kornprobst, Thierry Vieville. Contrast sensitivity adaptation in a virtual spiking retina and its adequation with mammalians retinas. In Perception, pages 67. 2009

  • Nicole Voges, Laurent Perrinet. Phase space analysis of networks based on biologically realistic parameters, URL . Journal of Physiology (Paris), 104(1-2):51--60, 2010

    We study cortical network dynamics for a more realistic network model. It represents, in terms of spatial scale, a large piece of cortex allowing for long-range connections, resulting in a rather sparse connectivity. We use two different types of conductance-based I&F neurons as excitatory and inhibitory units, as well as specific connection probabilities. In order to remain computationally tractable, we reduce neuron density, modelling part of the missing internal input via external poissonian spike trains. Compared to previous studies, we observe significant changes in the dynamical phase space: Altered activity patterns require another regularity measure than the coefficient of variation. We identify two types of mixed states, where different phases coexist in certain regions of the phase space. More notably, our boundary between high and low activity states depends predominantly on the relation between excitatory and inhibitory synaptic strength instead of the input rate. Key words: Artificial neural networks, Data analysis, Simulation, Spiking neurons. This work is supported by EC IP project FP6-015879 (FACETS).

  • Nicole Voges, Laurent Perrinet. Dynamics of cortical networks including long-range patchy connections, URL . In Eighth Göttingen Meeting of the German Neuroscience Society, pages T26-3C. 2009

    Most studies of cortical network dynamics are either based on purely random wiring or neighborhood couplings [1], focussing on a rather local scale. Neuronal connections in the cortex, however, show a more complex spatial pattern composed of local and long-range patchy connections [2,3] as shown in the figure: It represents a tracer injection (gray areas) in the GM of a flattened cortex (top view): Black dots indicate neuron positions, blue lines their patchy axonal ramifications, and red lines represent the local connections. Moreover, to include distant synapses, one has to enlarge the spatial scale from the typically assumed 1mm to 5mm side length. As it is our aim to analyze more realistic network models of the cortex we assume a distance dependent connectivity that reflects the geometry of dendrites and axons [3]. Here, we ask to what extent the assumption of specific geometric traits influences the resulting dynamical behavior of these networks. Analyzing various characteristic measures that describe spiking neurons (e.g., coefficient of variation, correlation coefficient), we compare the dynamical state spaces of different connectivity types: purely random or purely local couplings, a combination of local and distant synapses, and connectivity structures with patchy projections. On top of biologically realistic background states, a stimulus is applied in order to analyze their stabilities. As previous studies [1], we also find different dynamical states depending on the external input rate and the numerical relation between excitatory and inhibitory synaptic weights. Preliminary results indicate, however, that transitions between these states are much sharper in case of local or patchy couplings. This work is supported by EU Grant 15879 (FACETS). Thanks to Stefan Rotter who supervised the PhD project [3] this work is based on. Network dynamics are simulated with NEST/PyNN [4]. [1] A. Kumar, S. Schrader, A. Aertsen and S. Rotter, Neural Computation 20, 2008, 1-43. [2] T. Binzegger, R.J. Douglas and K.A.C. Martin, J. of Neurosci., 27(45), 2007, 12242-12254. [3] Voges N, Fakultaet fuer Biologie, Albert-Ludwigs-Universitaet Freiburg, 2007. [4] NEST. M.O. Gewaltig and M. Diesmann, Scholarpedia 2(4):1430.

  • Nicole Voges, Laurent U. Perrinet. Dynamical state spaces of cortical networks representing various horizontal connectivities, URL . In Proceedings of COSYNE, 2009

    Most studies of cortical network dynamics are either based on purely random wiring or neighborhood couplings, e.g., [Kumar, Schrader, Aertsen, Rotter, 2008, Neural Computation 20, 1--43]. Neuronal connections in the cortex, however, show a complex spatial pattern composed of local and long-range connections, the latter featuring a so-called patchy projection pattern, i.e., spatially clustered synapses [Binzegger, Douglas, Martin, 2007, J. Neurosci. 27(45), 12242--12254]. The idea of our project is to provide and to analyze probabilistic network models that more adequately represent horizontal connectivity in the cortex. In particular, we investigate the effect of specific projection patterns on the dynamical state space of cortical networks. Assuming an enlarged spatial scale we employ a distance dependent connectivity that reflects the geometry of dendrites and axons. We simulate the network dynamics using a neuronal network simulator NEST/PyNN. Our models are composed of conductance based integrate-and-fire neurons, representing fast spiking inhibitory and regular spiking excitatory cells. In order to compare the dynamical state spaces of previous studies with our network models we consider the following connectivity assumptions: purely random or purely local couplings, a combination of local and distant synapses, and connectivity structures with patchy projections. Similar to previous studies, we also find different dynamical states depending on the input parameters: the external input rate and the numerical relation between excitatory and inhibitory synaptic weights. These states, e.g., synchronous regular (SR) or asynchronous irregular (AI) firing, are characterized by measures like the mean firing rate, the correlation coefficient, the coefficient of variation and so forth. On top of identified biologically realistic background states (AI), stimuli are applied in order to analyze their stability. Comparing the results of our different network models we find that the parameter space necessary to describe all possible dynamical states of a network is much more concentrated if local couplings are involved. The transition between different states is shifted (with respect to both input parameters) and sharpened in dependence of the relative amount of local couplings. Local couplings strongly enhance the mean firing rate, and lead to smaller values of the correlation coefficient. In terms of emergence of synchronous states, however, networks with local versus non-local or patchy versus random remote connections exhibit a higher probability of synchronized spiking. Concerning stability, preliminary results indicate that again networks with local or patchy connections show a higher probability of changing from the AI to the SR state. We conclude that the combination of local and remote projections bears important consequences on the activity of network: The apparent differences we found for distinct connectivity assumptions in the dynamical state spaces suggest that network dynamics strongly depend on the connectivity structure. This effect might be even stronger with respect to the spatio-temporal spread of signal propagation. This work is supported by EC IP project FP6-015879 (FACETS).

  • Nicole Voges, Laurent Perrinet. Recurrent cortical networks with realistic horizontal connectivities show complex dynamics, URL . In Eighteenth Annual Computational Neuroscience Meeting: CNS*2009 Berlin, Germany. 18–23 July 2009, pages T26-3C + 10(Suppl 1):P176. 2009

    Most studies on the dynamics of recurrent cortical networks are either based on purely random wiring or neighborhood couplings. They deal with a local spatial scale, where approx. 10% of all possible connections are realized. Neuronal wiring in the cortex, however, shows a complex spatial pattern composed of local and long-range patchy connections, i.e. spatially clustered synapses. We ask to what extent such geometric traits influence the ’idle’ dynamics of cortical network models. Assuming an enlarged spatial scale we consider distinct network architectures, ranging from purely random to distance dependent connectivities with patchy projections. The latter are tuned to reflect the axonal arborizations present in layer 2/3 of cat V1. We consider different types of conductance based integrate-and-fire neurons with distance-dependent synaptic delays. Analyzing the characteristic measures describing spiking neuronal networks (e.g. correlations, regularity), we explore and compare the phase spaces and activity patterns of different types of network models. To examine stability and signal propagation properties we additionally applied local activity injections. Similar to previous studies we observe synchronous regular firing (SR state) for large νext and low inhibition, while small νext combined with large g results in asynchronous irregular firing (AI). Our SRslow and SI state, the occurrence of ’mixed’ states, and the more vertical phase space border significantly differ from previous findings.

  • Nicole Voges, Laurent U. Perrinet. Analyzing cortical network dynamics with respect to different connectivity assumptions, URL . In Proceedings of the second french conference on Computational Neuroscience, Marseille, 2008

  • Nicole Voges, Jens Kremkow, Laurent U. Perrinet. Dynamics of cortical networks based on patchy connectivity patterns. In FENS Abstract, 2008

  • Claudio Simoncini, Laurent U. Perrinet, Anna Montagnini, Pascal Mamassian, Guillaume S. Masson. Different pooling of motion information for perceptual speed discrimination and behavioral speed estimation. In Vision Science Society, 2010

  • Laurent U. Perrinet. Role of homeostasis in learning sparse representations, URL . Neural Computation, 22(7):1812--36, 2010

    Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such an efficient coding is achieved using a competition across neurons so as to generate a sparse representation, that is, where a relatively small number of neurons are simultaneously active. Indeed, different models of sparse coding coupled with Hebbian learning and homeostasis have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism which optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, the homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair: By contributing to optimize statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.

  • Laurent Perrinet, Guillaume S. Masson. Dynamical emergence of a neural solution for motion integration, URL . In Proceedings of AREADNE, 2010

  • Laurent Perrinet. Qui créera le premier calculateur intelligent?, URL . DocSciences, (13), 2010

  • Laurent Perrinet, Alexandre Reynaud, Frédéric Chavane, Guillaume S. Masson. Inferring monkey ocular following responses from V1 population dynamics using a probabilistic model of motion integration, URL . In Vision Science Society, 2009

    Short presentation of a large moving pattern elicits an ocular following response that exhibits many of the properties attributed to low-level motion processing such as spatial and temporal integration, contrast gain control and divisive interaction between competing motions. Similar mechanisms have been demonstrated in V1 cortical activity in response to center-surround gratings patterns measured with real-time optical imaging in awake monkeys (see poster of Reynaud et al., VSS09). Based on a previously developed Bayesian framework, we have developed an optimal statistical decoder of such an observed cortical population activity as recorded by optical imaging. This model aims at characterizing the statistical dependence between early neuronal activity and ocular responses and its performance was analyzed by comparing this neuronal read-out and the actual motor responses on a trial-by-trial basis. First, we show that relative performance of the behavioral contrast response function is similar to the best estimate obtained from the neural activity. In particular, we show that the latency of ocular response increases with low contrast conditions as well as with noisier instances of the behavioral task as decoded by the model. Then, we investigate the temporal dynamics of both neuronal and motor responses and show how motion information as represented by the model is integrated in space to improve population decoding over time. Lastly, we explore how a surrounding velocity non congruous with the central excitation information shunts the ocular response and how it is topographically represented in the cortical activity. Acknowledgment: European integrated project FACETS IST-15879.

  • Laurent Perrinet, Nicole Voges, Jens Kremkow, Guillaume S. Masson. Decoding center-surround interactions in population of neurons for the ocular following response , URL . In Proceedings of COSYNE, 2009

    Short presentation of a large moving pattern elicits an Ocular Following Response (OFR) that exhibits many of the properties attributed to low-level motion processing such as spatial and temporal integration, contrast gain control and divisive interaction between competing motions. Similar mechanisms have been demonstrated in V1 cortical activity in response to center-surround gratings patterns measured with real-time optical imaging in awake monkeys. More recent experiments of OFR have used disk gratings and bipartite stimuli which are optimized to study the dynamics of center-surround integration. We quantified two main characteristics of the global spatial integration of motion from an intermediate map of possible local translation velocities: (i) a finite optimal stimulus size for driving OFR, surrounded by an antagonistic modulation and (ii) a direction selective suppressive effect of the surround on the contrast gain control of the central stimuli [Barthelemy06, Barthelemy07]. In fact, the machinery behind the visual perception of motion and the subsequent sensorimotor transformation is confronted to uncertainties which are efficiently resolved in the primate's visual system. We may understand this response as an ideal observer in a probabilistic framework by using Bayesian theory [Weiss02] and we extended in the dynamical domain the ideal observer model to simulate the spatial integration of the different local motion cues within a probabilistic representation. We proved that this model is successfully adapted to model the OFR for the different experiments [Perrinet07neurocomp], that is for different levels of noise with full field gratings, with disks of various sizes and also for the effect of a flickering surround. However, another ad hoc inhibitory mechanism has to be added in this model to account for suppressive effects of the surround. We explore here a hypothesis where this could be understood as the effect of a recurrent prediction of information in the velocity map. In fact, in previous models, the integration step assumes independence of the local information while natural scenes are very predictable: Due to the rigidity and inertia of physical objects in visual space, neighboring local spatiotemporal information is redundant and one may introduce this a priori knowledge of the statistics of the input in the ideal observer model. We implement this in a realistic model of a layer representing velocities in a map of cortical columns, where predictions are implemented by lateral interactions within the cortical area. First, raw velocities are estimated locally from images and are propagated to this area in a feed-forward manner. Using this velocity map, we progressively learn the dependence of local velocities in a second layer of the model. This algorithm is cyclic since the prediction is using the local velocities which are themselves using both the feed-forward input and the prediction: We control the convergence of this process by measuring results for different learning rates. Results show that this simple model is sufficient to disambiguate characteristic patterns such as the Barber-Pole illusion. Due to the recursive network which is modulating the velocity map, it also explains that the representation may exhibit some memory, such as when an object suddenly disappears or when presenting a dot followed by a line (line-motion illusion). Finally, we applied this model that was tuned over a set of natural scenes to gratings of increasing sizes. We observed first that the feed-forward response as tuned to neurophysiological data gave lower responses at higher eccentricities, and that this effect was greater for higher grating frequencies. Then, we observed that depending on the size of the disk and on its spatial frequency, the recurrent network of lateral interactions Lastly, we explore how a surrounding velocity non congruous with the central excitation information shunts the ocular response and how it is topographically represented in the cortical activity.

  • Laurent Perrinet, Guillaume S. Masson. Decoding the population dynamics underlying ocular following response using a probabilistic framework. In Eighteenth Annual Computational Neuroscience Meeting: CNS*2009 Berlin, Germany. 18--23 July 2009, pages 10(Suppl 1):P359. 2009

  • Laurent Perrinet. Adaptive Sparse Spike Coding : applications of Neuroscience to the compression of natural images, URL . In Optical and Digital Image Processing Conference 7000 - Proceedings of SPIE Volume 7000, 7 - 11 April 2008, pages 15 - S4. 2008 If modern computers are sometimes superior to cognition in some specialized tasks such as playing chess or browsing a large database, they can't beat the efficiency of biological vision for such simple tasks as recognizing a relative or following an object in a complex background. We present in this paper our attempt at outlining the dynamical, parallel and event-based representation for vision in the architecture of the central nervous system. We will illustrate this by showing that in a signal matching framework, an L/LN (linear/non-linear) cascade may efficiently transform a sensory signal into a neural spiking signal and we apply this framework to a model retina. However, this code gets redundant when using an over-complete basis as is necessary for modeling the primary visual cortex: we therefore optimize the efficiency cost by increasing the sparseness of the code. This is implemented by propagating and canceling redundant information using lateral interactions. We compare the efficiency of this representation in terms of compression, that is, the reconstruction quality as a function of the coding length. This will correspond to a modification of the Matching Pursuit algorithm where the ArgMax function is optimized for competition, or Competition Optimized Matching Pursuit (COMP). We will particularly focus on bridging neuroscience and image processing and on the advantages of such an interdisciplinary approach.

  • Laurent Perrinet, Guillaume S. Masson. Modeling spatial integration in the ocular following response to center-surround stimulation using a probabilistic framework, URL . In Proceedings of COSYNE, 2008

  • Laurent Perrinet. What adaptive code for efficient spiking representations? A model for the formation of receptive fields of simple cells, URL . In Proceedings of COSYNE, 2008

  • Laurent Perrinet, Guillaume S. Masson. Decoding the population dynamics underlying ocular following response using a probabilistic framework, URL . In Proceedings of AREADNE, 2008

  • Laurent Perrinet, Guillaume S. Masson. Modeling spatial integration in the ocular following response using a probabilistic framework, URL . Journal of Physiology (Paris), 2007

    The machinery behind the visual perception of motion and the subsequent sensori-motor transformation, such as in Ocular Following Response (OFR), is confronted to uncertainties which are efficiently resolved in the primate's visual system. We may understand this response as an ideal observer in a probabilistic framework by using Bayesian theory (Weiss et al., 2002) which we previously proved to be successfully adapted to model the OFR for different levels of noise with full field gratings (Perrinet et al., 2005). More recent experiments of OFR have used disk gratings and bipartite stimuli which are optimized to study the dynamics of center-surround integration. We quantified two main characteristics of the spatial integration of motion: (i) a finite optimal stimulus size for driving OFR, surrounded by an antagonistic modulation and (ii) a direction selective suppressive effect of the surround on the contrast gain control of the central stimuli (Barthélemy et al., 2006). Herein, we extended the ideal observer model to simulate the spatial integration of the different local motion cues within a probabilistic representation. We present analytical results which show that the hypothesis of independence of local measures can describe the integration of the spatial motion signal. Within this framework, we successfully accounted for the contrast gain control mechanisms observed in the behavioral data for center-surround stimuli. However, another inhibitory mechanism had to be added to account for suppressive effects of the surround.

  • Laurent Perrinet. On efficient sparse spike coding schemes for learning natural scenes in the primary visual cortex, URL . In Sixteenth Annual Computational Neuroscience Meeting: CNS*2007, Toronto, Canada. 7--12 July 2007, 2007

    We describe the theoretical formulation of a learning algorithm in a model of the primary visual cortex (V1) and present results of the efficiency of this algorithm by comparing it to the SparseNet algorithm [1]. Like the SparseNet algorithm, it is based on a model of signal synthesis as a Linear Generative Model but differs in the efficiency criteria for the representation. This learning algorithm is in fact based on an efficiency criterion based on Occam's razor: for a similar quality, the shortest representation should be privileged. This inverse problem is NP-complete and we propose here a greedy solution which is based on the architecture and nature of neural computations [2]. It proposes that the supra-threshold neural activity progressively removes redundancies in the representation based on a correlation-based inhibition and provides a dynamical implementation close to the concept of neural assemblies from Hebb [3]. We present here results of simulation of this network with small natural images (available at https://laurentperrinet.github.io/publication/perrinet-19-hulk) and compare it to the Sparsenet solution. Extending it to realistic images and to the NEST simulator http://www.nest-initiative.org/, we show that this learning algorithm based on the properties of neural computations produces adaptive and efficient representations in V1. 1. Olshausen B, Field DJ: Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Res 1997, 37:3311-3325. 2. Perrinet L: Feature detection using spikes: the greedy approach. J Physiol Paris 2004, 98(4–6):530-539. 3. Hebb DO: The organization of behavior. Wiley, New York; 1949.

  • Laurent Perrinet, Frédéric V. Barthélemy, Guillaume S. Masson. Input-output transformation in the visuo-oculomotor loop: modeling the ocular following response to center-surround stimulation in a probabilistic framework. In 1ère conférence francophone NEUROsciences COMPutationnelles - NeuroComp, 2006

    The quality of the representation of an object's motion is limited by the noise in the sensory input as well as by an intrinsic ambiguity due to the spatial limitation of the visual motion analyzers (aperture problem). Perceptual and oculomotor data demonstrate that motion processing of extended objects is initially dominated by the local 1D motion cues orthogonal to the object's edges, whereas 2D information takes progressively over and leads to the final correct representation of global motion. A Bayesian framework accounting for the sensory noise and general expectancies for object velocities has proven successful in explaining several experimental findings concerning early motion processing [1, 2, 3]. However, a complete functional model, encompassing the dynamical evolution of object motion perception is still lacking. Here we outline several experimental observations concerning human smooth pursuit of moving objects and more particularly the time course of its initiation phase. In addition, we propose a recursive extension of the Bayesian model, motivated and constrained by our oculomotor data, to describe the dynamical integration of 1D and 2D motion information.

  • Laurent Perrinet, Jens Kremkow, Frédéric Barthélemy, Guillaume S. Masson, Frédéric Chavane. Input-output transformation in the visuo-oculomotor loop: modeling the ocular following response to center-surround stimulation in a probabilistic framework. In FENS, 2006

  • Laurent Perrinet, Jens Kremkow. Dynamical contrast gain control mechanisms in a layer 2/3 model of the primary visual cortex. In The Functional Architecture of the Brain : from Dendrites to Networks. Symposium in honour of Dr Suzanne Tyc-Dumont. 4-5 May 2006. GLM, Marseille, France, 2006 Computations in a cortical column are characterized by the dynamical, event-based nature of neuronal signals and are structured by the layered and parallel structure of cortical areas. But they are also characterized by their efficiency in terms of rapidity and robustness. We propose and study here a model of information integration in the primary visual cortex (V1) thanks to the parallel and interconnected network of similar cortical columns. In particular, we focus on the dynamics of contrast gain control mechanisms as a function of the distribution of information relevance in a small population of cortical columns. This cortical area is modeled as a collection of similar cortical columns which receive input and are linked according to a specific connectivity pattern which is relevant to this area. These columns are simulated using the NEST simulator [Morrison04] using conductance-based Integrate-and-Fire neurons and consist vertically of 3 different layers. The architecture was inspired by neuro-physiological observations on the influence of neighboring activities on pyramidal cells activity and correlates with the lateral flow of information observed in the primary visual cortex, notably in optical imaging experiments [Jancke04], and is similar in its final implementation to local micro-circuitry of the cortical column presented by [Grossberg05]. They show prototypical spontaneous dynamical behavior to different levels of noise which are relevant to the generic modeling of biological cortical columns [Kremkow05]. In the future, the connectivity will be derived from an algorithm that was used for modeling the transient spiking response of a layer of neurons to a flashed image and which was based on the Matching Pursuit algorithm [Perrinet04]. The visual input is first transmitted from the Lateral Geniculate Nucleus (LGN) using the model of [Gazeres98]. It transforms the image flow into a stream of spikes with contrast gain control mechanisms specific to the retina and the LGN. This spiking activity converges to the pyramidal cells of layer 2/3 thanks to the specification of receptive fields in layer 4 providing a preference for oriented local contrasts in the spatio-temporal visual flow. In particular, we use in these experiments visual input organized in a center-surround spatial pattern which was optimized in size to maximize the response of a column in the center and to the modulation of this response by the surround (bipartite stimulus). This class of stimuli provides different levels of input activation and of visual ambiguity in the visual space which were present in the spatio-temporal correlations in the input spike flow optimized to the resolution of cortical columns in the visual space. It thus provides a method to reveal the dynamics of information integration and particularly of contrast gain control which are characteristic of the function of V1.

  • Laurent Perrinet. An efficiency razor for model selection and adaptation in the primary visual cortex. In Fifteenth Annual Computational Neuroscience Meeting, 2006

    We describe the theoretical formulation of a learning algorithm in a model of the primary visual cortex (V1) and present results of the efficiency of this algorithm by comparing it to the Sparsenet algorithm (Olshausen, 1996). Like the Sparsenet algorithm, it is based on a model of signal synthesis as a Linear Generative Model but differs in the efficiency criteria for the representation. This learning algorithm is in fact based on an efficiency criterion based on Occam's razor: for a similar quality, the shortest representation should be privileged. This inverse problem is NP-complete and we propose here a greedy solution which is based on the architecture and nature of neural computations (Perrinet, 2006). We present here results of a simulation of this network on small natural images (available at https://laurentperrinet.github.io/publication/perrinet-19-hulk ) and compare it to the Sparsenet solution. We show that this solution based on neural computations produces an adaptive algorithm for efficient representations in V1.

  • Laurent Perrinet, Jens Kremkow. Dynamical contrast gain control mechanisms in a layer 2/3 model of the primary visual cortex. In Physiogenic and pathogenic oscillations: the beauty and the beast, 5th INMED/TINS CONFERENCE SEPTEMBER 9 - 12, 2006, La Ciotat, France, 2006

  • Laurent Perrinet. Dynamical Neural Networks: modeling low-level vision at short latencies, URL . pages 163--225.

  • Anna Montagnini, Pascal Mamassian, Laurent U. Perrinet, Eric Castet, Guillaume S. Masson. Bayesian modeling of dynamic motion integration, URL . Journal of Physiology (Paris), 101(1-3):64-77, 2007

    The quality of the representation of an object's motion is limited by the noise in the sensory input as well as by an intrinsic ambiguity due to the spatial limitation of the visual motion analyzers (aperture problem). Perceptual and oculomotor data demonstrate that motion processing of extended objects is initially dominated by the local 1D motion cues orthogonal to the object's edges, whereas 2D information takes progressively over and leads to the final correct representation of global motion. A Bayesian framework accounting for the sensory noise and general expectancies for object velocities has proven successful in explaining several experimental findings concerning early motion processing [1, 2, 3]. However, a complete functional model, encompassing the dynamical evolution of object motion perception is still lacking. Here we outline several experimental observations concerning human smooth pursuit of moving objects and more particularly the time course of its initiation phase. In addition, we propose a recursive extension of the Bayesian model, motivated and constrained by our oculomotor data, to describe the dynamical integration of 1D and 2D motion information.

  • Jens Kremkow, Laurent U. Perrinet, Guillaume S. Masson, Ad Aertsen. Functional consequences of correlated excitatory and inhibitory conductances in cortical networks, URL . Journal of Computational Neuroscience, 28(3):579-94, 2010

    Neurons in the neocortex receive a large number of excitatory and inhibitory synaptic inputs. Excitation and inhibition dynamically balance each other, with inhibition lagging excitation by only a few milliseconds. To characterize the functional consequences of such correlated excitation and inhibition, we studied models in which this correlation structure is induced by feedforward inhibition (FFI). Simple circuits show that an effective FFI changes the integrative behavior of neurons such that only synchronous inputs can elicit spikes, causing the responses to be sparse and precise. Further, effective FFI increases the selectivity for propagation of synchrony through a feedforward network, thereby increasing the stability to background activity. Last, we show that recurrent random networks with effective inhibition are more likely to exhibit dynamical network activity states as have been observed in vivo. Thus, when a feedforward signal path is embedded in such a recurrent network, the stabilizing effect of effective inhibition creates a suitable substrate for signal propagation. In conclusion, correlated excitation and inhibition support the notion that synchronous spiking may be important for cortical processing.

  • Jens Kremkow. Correlating Excitation and Inhibition in Visual Cortical Circuits: Functional Consequences and Biological Feasibility, PhD thesis. 2009 The primary visual cortex (V1) is one of the most studied cortical areas in the brain. Together with the retina and the lateral geniculate nucleus (LGN) it forms the early visual system. Artificial stimuli (i.e. drifting gratings (DG)) have given insights into the neural basis of visual processing. However, recently researchers have started to use more complex natural visual stimuli (NI), arguing that the low dimensional artificial stimuli are not sufficient for a complete understanding of the visual system. For example, whereas the responses of V1 neurons to DG are dense but with variable spike timings, the neurons respond with only a few but precise spikes to NI. Furthermore, linear receptive field models provide a good fit to responses during simple stimuli, however, they often fail during NI. To investigate the mechanisms behind the stimulus dependent responses of cortical neurons we have built a biophysical model of the early visual system. Our results show that during NI the LGN afferents show epochs of correlated activity, resulting in precise spike timings in V1. The sparseness of the responses to NI can be explained by correlated inhibitory conductance. We continue by investigating the origin of stimulus dependent nonlinear responses, by comparing models of different complexity. Our results suggest that adaptive processes shape the responses, depending on the temporal properties of the stimuli. Lastly, we study the functional consequences of correlated excitatory and inhibitory conductances in more detail in generic models. The presented work gives new perspectives on the processing of the early visual system, in particular on the importance of correlated conductances.

  • Jens Kremkow, Laurent Perrinet, Guillaume S. Masson, Ad Aertsen. Functional consequences of correlated excitation and inhibition on single neuron integration and signal propagation through synfire chains, URL . In Eighth Göttingen Meeting of the German Neuroscience Society, pages T26-6B. 2009 Neurons receive a large number of excitatory and inhibitory synaptic inputs whose temporal interplay determines their spiking behavior. On average, excitation (Gexc) and inhibition (Ginh) balance each other, such that spikes are elicited by fluctuations [1]. In addition, it has been shown in vivo that Gexc and Ginh are correlated, with Ginh lagging Gexc by only a few milliseconds (6ms), creating a small temporal integration window [2,3]. This correlation structure could be induced by feed-forward inhibition (FFI), which has been shown to be present at many sites in the central nervous system. To characterize the functional consequences of the FFI, we first modeled a simple circuit using spiking neurons with conductance-based synapses and studied the effect on the single neuron integration. We then coupled many such circuits to construct a feed-forward network (synfire chain [4,5]) and investigated the effect of FFI on signal propagation along such a feed-forward network. We found that the small temporal integration window, induced by the FFI, changes the integrative properties of the neuron. Only transient stimuli could produce a response when the FFI was active whereas without FFI the neuron responded to both steady and transient stimuli. Due to the increase in selectivity to transient inputs, the conditions of signal propagation through the feed-forward network changed as well. Whereas synchronous inputs could reliably propagate, high asynchronous input rates, which are known to induce synfire activity [6], failed to do so. In summary, the FFI increased the stability of the synfire chain. Supported by DFG SFB 780, EU-15879-FACETS, BMBF 01GQ0420 to BCCN Freiburg. [1] Kumar A., Schrader S., Aertsen A. and Rotter S. (2008). The high-conductance state of cortical networks. Neural Computation, 20(1):1--43. [2] Okun M. and Lampl I. (2008). Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nat Neurosci, 11(5):535--7. [3] Baudot P., Levy M., Marre O., Monier C. and Frégnac Y. (2008). submitted. [4] Abeles M. (1991). Corticonics: Neural circuits of the cerebral cortex. Cambridge, UK. [5] Diesmann M., Gewaltig M-O and Aertsen A. (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature, 402(6761):529--33. [6] Kumar A., Rotter S. and Aertsen A. (2008), Conditions for propagating synchronous spiking and asynchronous firing rates in a cortical network model. J Neurosci 28 (20), 5268--80.

  • Jens Kremkow, Laurent Perrinet, Cyril Monier, Yves Fregnac, Guillaume S. Masson, Ad Aertsen. Control of the temporal interplay between excitation and inhibition by the statistics of visual input, URL . URL2 . In Eighteenth Annual Computational Neuroscience Meeting: CNS*2009 Berlin, Germany. 18–23 July 2009, pages Oral presentation, 10(Suppl 1):O21. 2009

  • Jens Kremkow, Laurent Perrinet, Alexandre Reynaud, Ad Aertsen, Guillaume S. Masson, Frédéric Chavane. Dynamics of non-linear cortico-cortical interactions during motion integration in early visual cortex: A spiking neuron model of an optical imaging study in the awake monkey, URL . URL2 . In Eighteenth Annual Computational Neuroscience Meeting: CNS*2009 Berlin, Germany. 18–23 July 2009, pages 10(Suppl 1):P176. 2009

  • Jens Kremkow, Laurent Perrinet, Pierre Baudot, Manu Levy, Olivier Marre, Cyril Monier, Yves Fregnac, Guillaume Masson, Ad Aertsen. Control of the temporal interplay between excitation and inhibition by the statistics of visual input: a V1 network modelling study, URL . In Proceedings of the Society for Neuroscience conference, 2008

    In the primary visual cortex (V1), single cell responses to simple visual stimuli (gratings) are usually dense but with a high trial-by-trial variability. In contrast, when exposed to full field natural scenes, the firing patterns of these neurons are sparse but highly reproducible over trials (Marre et al., 2005; Frégnac et al., 2006). It is still not understood how these two classes of stimuli can elicit these two distinct firing behaviours. A common model for simple-cell computation in layer 4 is the "push-pull" circuitry (Troyer et al. 1998). It accounts for the observed anti-phase behaviour between excitatory and inhibitory conductances in response to a drifting grating (Anderson et al., 2000; Monier et al., 2008), creating a wide temporal integration window during which excitation is integrated without the shunting or opponent effect of inhibition and allowed to elicit multiple spikes. This is in contrast to recent results from intracellular recordings in vivo during presentation of natural scenes (Baudot et al., submitted). Here the excitatory and inhibitory conductances were highly correlated, with inhibition lagging excitation by only a few milliseconds (~6 ms). This small lag creates a narrow temporal integration window such that only synchronized excitatory inputs can elicit a spike, similar to parallel observations in other cortical sensory areas (Wehr and Zador, 2003; Okun and Lampl, 2008). To investigate the cellular and network mechanisms underlying these two different correlation structures, we constructed a realistic model of the V1 network using spiking neurons with conductance based synapses. We calibrated our model to fit the irregular ongoing activity pattern as well as in vivo conductance measurements during drifting grating stimulation and then extracted predicted responses to natural scenes seen through eye-movements. Our simulations reproduced the above described experimental observation, together with anti-phase behaviour between excitation and inhibition during gratings and phase lagged activation during natural scenes. In conclusion, the same cortical network that shows dense and variable responses to gratings exhibits sparse and precise spiking to natural scenes. Work is under way to show to what extent this feature is specific for the feedforward vs recurrent nature of the modelled circuit.

  • Jens Kremkow, Laurent U. Perrinet, Ad Aertsen, Guillaume S. Masson. Functional properties of feed-forward inhibition, URL . In Proceedings of the second french conference on Computational Neuroscience, Marseille, 2008

  • Jens Kremkow, Laurent Perrinet, Arvind Kumar, Ad Aertsen, Guillaume Masson. Synchrony in thalamic inputs enhances propagation of activity through cortical layers, URL URL2 . In Sixteenth Annual Computational Neuroscience Meeting: CNS*2007, Toronto, Canada. 7--12 July 2007, 2007

    Sensory input enters the cortex via the thalamocortical (TC) projection, where it elicits large postsynaptic potentials in layer 4 neurons [1]. Interestingly, the TC connections account for only 15% of synapses onto these neurons. It has therefore been controversially discussed how thalamic input can drive the cortex. Strong TC synapses have been one suggestion to ensure the strength of the TC projection ("strong-synapse model"). Another possibility is that the excitation from single thalamic fibers is weak but gets amplified by recurrent excitatory feedback in layer 4 ("amplifier model"). Bruno and Sakmann [2] recently provided new evidence that individual TC synapses in vivo are weak and only produce small excitatory postsynaptic potentials. However, they suggested that thalamic input can activate the cortex due to the synchronous firing and that cortical amplification is not required. This would support the "synchrony model" proposed by correlation analysis [3]. Here, we studied the effect of correlation in the TC input, with weak synapses, on the responses of a layered cortical network model. The connectivity of the layered network was taken from Binzegger et al. 2004 [4]. The network was simulated using NEST [5] with the Python interface PyNN [6] to enable interoperability with different simulators. The sensory input to layer 4 was modelled by a simple retino-geniculate model of the transformation of light into spike trains [7], which was implemented by leaky integrate-and-fire model neurons. We found that introducing correlation into TC inputs enhanced the likelihood to produce responses in layer 4 and improved the activity propagation across layers. In addition, we compared the response of the cortical network to different noise conditions and obtained contrast response functions which were in accordance with neurophysiological observations. This work is supported by the 6th RFP of the EU (grant no. 15879-FACETS) and by the BMBF grant 01GQ0420 to the BCCN Freiburg. 1. Chung S, Ferster D: Strength and orientation tuning of the thalamic input to simple cells revealed by electrically evoked cortical suppression. Neuron 1998, 20:1177-1189. 2. Bruno M, Sakmann B: Cortex is driven by weak but synchronously active thalamocortical synapses. Science 2006, 312:1622-1627. 3. Alonso JM, Usrey WM, Reid RC: Precisely correlated firing in cells of the lateral geniculate nucleus. Nature 1996, 383:815-819. 4. Binzegger T, Douglas RJ, Martin KAC: A quantitative map of the circuit of the cat primary visual cortex. J Neurosci 2004, 24:8441-8453. 5. NEST http://www.nest-initiative.org. 6. PyNN http://neuralensemble.org/PyNN. 7. Gazeres N, Borg-Graham LJ, Frégnac Y: A phenomenological model of visually evoked spike trains in cat geniculate nonlagged X-cells. Vis Neurosci 1998, 15:1157-1174.

  • Mina Aliakbari Khoei, Laurent Perrinet, Guillaume S. Masson. Dynamical emergence of a neural solution for motion integration, URL . In Proceedings of Tauc, 2010

  • Sylvain Fischer, Filip Sroubek, Laurent U. Perrinet, Rafael Redondo, Gabriel Cristóbal. Self-invertible 2D log-Gabor wavelets, URL . International Journal of Computer Vision, 2007

  • Sylvain Fischer, Rafael Redondo, Laurent Perrinet, Gabriel Cristóbal. Sparse approximation of images inspired from the functional architecture of the primary visual areas, URL URL2 . EURASIP Journal on Advances in Signal Processing, special issue on Image Perception, :Article ID 90727, 16 pages, 2007

  • Andrew P Davison, Daniel Bruderle, Jochen Eppler, Jens Kremkow, Eilif Muller, Dejan Pecevski, Laurent Perrinet, Pierre Yger. PyNN: A Common Interface for Neuronal Network Simulators., URL . Frontiers in Neuroinformatics, 2:11, 2008

    Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN.

  • Andrew Davison, Pierre Yger, Jens Kremkow, Laurent Perrinet, Eilif Muller. PyNN: towards a universal neural simulator API in Python, URL URL2 . In Sixteenth Annual Computational Neuroscience Meeting: CNS*2007, Toronto, Canada. 7--12 July 2007, 2007

    Trends in programming language development and adoption point to Python as the high-level systems integration language of choice. Python leverages a vast developer-base external to the neuroscience community, and promises leaps in simulation complexity and maintainability to any neural simulator that adopts it. PyNN http://neuralensemble.org/PyNN strives to provide a uniform application programming interface (API) across neural simulators. Presently NEURON and NEST are supported, and support for other simulators and neuromorphic VLSI hardware is under development. With PyNN it is possible to write a simulation script once and run it without modification on any supported simulator. It is also possible to write a script that uses capabilities specific to a single simulator. While this sacrifices simulator-independence, it adds flexibility, and can be a useful step in porting models between simulators. The design goals of PyNN include allowing access to low-level details of a simulation where necessary, while providing the capability to model at a high level of abstraction, with concomitant gains in development speed and simulation maintainability. Another of our aims with PyNN is to increase the productivity of neuroscience modeling, by making it faster to develop models de novo, by promoting code sharing and reuse across simulator communities, and by making it much easier to debug, test and validate simulations by running them on more than one simulator. Modelers would then become free to devote more software development effort to innovation, building on the simulator core with new tools such as network topology databases, stimulus programming, analysis and visualization tools, and simulation accounting. The resulting, community-developed 'meta-simulator' system would then represent a powerful tool for overcoming the so-called complexity bottleneck that is presently a major roadblock for neural modeling.

  • Emmanuel Daucé, Laurent Perrinet. Computational Neuroscience, from Multiple Levels to Multi-level, URL . Journal of Physiology (Paris), 104(1--2):1--4, 2010

    Despite the long and fruitful history of neuroscience, a global, multi-level description of cardinal brain functions is still far from reach. Using analytical or numerical approaches, Computational Neuroscience aims at the emergence of such common principles by using concepts from Dynamical Systems and Information Theory. The aim of this Special Issue of the Journal of Physiology (Paris) is to reflect the latest advances in this field, which have been presented during the NeuroComp08 conference that took place in October 2008 in Marseille (France). By highlighting a selection of works presented at the conference, we wish to illustrate the intrinsic diversity of this field of research but also the need for a unification effort that is becoming more and more necessary to understand the brain in its full complexity, from multiple levels of description to a multi-level understanding.

  • B. Cessac, E. Daucé, Laurent U. Perrinet, M. Samuelides. Topics in Dynamical Neural Networks: From Large Scale Neural Networks to Motor Control and Vision, `URL <https://laurentperrinet.github.io/publication/cessac-07>`__ . Springer Berlin / Heidelberg, 2007.

  • Amarender Bogadhi, Anna Montagnini, Pascal Mamassian, Laurent U. Perrinet, Guillaume S. Masson. A recurrent Bayesian model of dynamic motion integration for smooth pursuit. In Vision Science Society, 2010

  • Amarender Bogadhi, Anna Montagnini, Pascal Mamassian, Laurent U. Perrinet, Guillaume S. Masson. Pursuing motion illusions: a realistic oculomotor framework for Bayesian inference, URL . Vision Research, 51(8):867--80, 2011

    Accuracy in estimating an object's global motion over time is not only affected by the noise in visual motion information but also by the spatial limitation of the local motion analyzers (aperture problem). Perceptual and oculomotor data demonstrate that during the initial stages of the motion information processing, 1D motion cues related to the object's edges have a dominating influence over the estimate of the object's global motion. However, during the later stages, 2D motion cues related to terminators (edge-endings) progressively take over, leading to a final correct estimate of the object's global motion. Here, we propose a recursive extension to the Bayesian framework for motion processing (Weiss, Simoncelli, & Adelson, 2002) cascaded with a model oculomotor plant to describe the dynamic integration of 1D and 2D motion information in the context of smooth pursuit eye movements. In the recurrent Bayesian framework, the prior defined in the velocity space is combined with the two independent measurement likelihood functions, representing edge-related and terminator-related information, respectively to obtain the posterior. The prior is updated with the posterior at the end of each iteration step. The maximum-a posteriori (MAP) of the posterior distribution at every time step is fed into the oculomotor plant to produce eye velocity responses that are compared to the human smooth pursuit data. The recurrent model was tuned with the variance of pursuit responses to either "pure" 1D or "pure" 2D motion. The oculomotor plant was tuned with an independent set of oculomotor data, including the effects of line length (i.e. stimulus energy) and directional anisotropies in the smooth pursuit responses. The model not only provides an accurate qualitative account of dynamic motion integration but also a quantitative account that is close to the smooth pursuit response across several conditions (three contrasts and three speeds) for two human subjects.

  • Frédéric Barthélemy, Laurent Perrinet, Éric Castet, Guillaume S. Masson. Dynamics of distributed 1D and 2D motion representations for short-latency ocular following, URL . Vision Research, 48(4):501--22, 2008

HomeBrew: compiling a python toolchain

Warning

This post is certainly obsolete...

# install python through HomeBrew as a framework
brew install python --framework
mkdir ~/Frameworks
ln -s "/usr/local/Cellar/python/2.7.2/Frameworks/Python.framework" ~/Frameworks





# bootstrap pip
/usr/local/share/python/easy_install pip
/usr/local/share/python/pip install --upgrade distribute

# libraries
brew install gfortran
pip install -U ipython

# useful packages
pip install -U nose
pip install -U progressbar
easy_install pyreport
easy_install -f http://dist.plone.org/thirdparty/ -U PIL==1.1.7
pip install -U mercurial

# numpy et al
pip install -U numpy
pip install -U scipy
pip install -U -e git+git@github.com:matplotlib/matplotlib.git#egg=matplotlib
# pip install -f http://downloads.sourceforge.net/project/matplotlib/matplotlib/matplotlib-1.0/matplotlib-1.0.0.tar.gz matplotlib

# IDE
pip install -U sphinx pyflakes rope
brew install sip
brew install pyqt
pip install -U spyder

# mayavi
brew install vtk --python
pip install -U traitsbackendqt
pip install -U configobj
pip install  -U "Mayavi[app]"

SpikeStream & Nemo

  • SpikeStream & Nemo are (ultra fast) neural simulation frameworks. cool. but how to compile on ubuntu?

Warning

This post is certainly obsolete...

Ermentrout : "Double or Nothing: Phosphenes and the periodic driving of cortex"

  • GATSBY UNIT EXTERNAL SEMINAR

    Bard Ermentrout
    Department of Mathematics
    University of Pittsburgh
    
    Wednesday 9 March 2011, 16:00
    
    Seminar Room, Wellcome Trust Centre for Neuroimaging (FIL)
    12 Queen Square, London, WC1N 3AR
    
    Title:
    
    Double or Nothing: Phosphenes and the periodic driving of cortex
    
    Abstract:
    
    In this talk, I examine two different types of phosphenes - patterns in the visual systems evoked from within it. I first study contour phosphenes in which direct stimulation of the eyeball coupled with a moving bar in the visual field produces slowly propagating waves. The mechanism appears to be due to period doubling which produces an intrinsic bistability. Using averaging, I analyze the dynamics of a one-dimensional analog. In the second part of the talk, I study flicker-induced hallucinations in which diffuse stroboscopic light is capable of  evoking spatial patterns in the visual field. I use Floquet theory and symmetric bifurcation theory to explain experiments that indicate different patterns are seen with different temporal frequencies.
  • to listen @ http://www.fields.utoronto.ca/audio/10-11/CMM_seminar/ermentrout/index.html?7;large#slideloc

  • to see @ http://av.fields.utoronto.ca/slides/10-11/CMM_seminar/ermentrout/download.pdf

  • citation

    The lively mind of the child revels in the manifold stimuli of the external world. Who does not remember, if only dimly, such games from that beautiful time? One of them, which could keep us busy at a more serious age, is as follows: I stand in bright sunlight with closed eyes and face the sun. Then I move my outstretched, somewhat separated, fingers up and down in front of the eyes, so that they are alternately illuminated and shaded. In addition to the uniform yellow-red that one expects with closed eyes, there appear beautiful regular figures that are initially difficult to define but slowly become clearer. When we continue to move the fingers, the figure becomes more complex and fills the whole visual field (Jan Purkinje, 1819)
  • solution to the wagon-wheel illusion

Change User ID and Group ID in Snow Leopard

  • find source and target UID / GID using the id command on unix and dscl localhost read /Local/Default/Users/lup in MacOsX

Warning

This post is certainly obsolete...

  • from http://macosx.com/tech-support/change-user-idgroup-id-in-leopard/336380.html :

    dscl . -change $HOME UniqueID 41167 545
    dscl . -change $HOME PrimaryGroupID 41167 1007
    chown -R 545:1007 $HOME
  • Remember to run the chown command afterwards, or you will not be able to access your home directory. Finally, log out and log in.

  • you may have to propagate changes on other drives (backup disks and such)

Ubuntu 10.10 64bit AHCI hosted on a dell T3500

Warning

This post is certainly obsolete...

  • quoting http://ubuntuforums.org/showpost.php?p=10189331&postcount=35 : "For those (like me) who aren't familiar with grub tampering, here's what I did to make the change automatic:"

    sudo cp /boot/grub/grub.cfg /boot/grub/grub.cfg.orig
    sudo cp /etc/default/grub /etc/default/grub.orig
    sudo vi /etc/default/grub
    # in the editor, change the line
    #   GRUB_CMDLINE_LINUX=""
    # into
    #   GRUB_CMDLINE_LINUX="pci=nocrs"
    sudo update-grub
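
  • after rebooting, you can check that the kernel actually picked up the new option:

    cat /proc/cmdline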

pmset: selecting the sleep mode in Mac Os X

Warning

This post is certainly obsolete...

To select one of the different sleep modes of the Mac use the command-line tool pmset:

  • To show the current settings:

    pmset -g
  • The hibernatemode can be 3 (default: safeSleep, i.e. the RAM content is also written to disk when the lid is closed), 0 (pure RAM sleep), 1 (pure deep-sleep). Turn the safeSleep off:

    sudo pmset -a hibernatemode 0
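  • To go back to the default safe-sleep behaviour of a laptop, set the mode back to 3:

    sudo pmset -a hibernatemode 3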

handling processes in bash

  • Give detailed information on all python processes:

    ps -fp $(pgrep -d, -x python)
  • Make all python processes run nicer so that they do not obstruct other processes / users:

    renice 14 `pgrep python`
  • listing processes in the current bash session:

    jobs -l
  • stopping all python processes :

    pkill -STOP python
  • resuming all python processes ( to test ... ) :

    pkill -CONT python
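  • controlling jobs started from the current shell, with standard bash job control (a minimal sketch):

    sleep 100 &   # start a job in the background
    jobs -l       # list it together with its PID
    fg %1         # bring job 1 to the foreground (Ctrl-Z suspends it again)
    bg %1         # resume a suspended job in the background
    kill %1       # terminate job 1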

python in user space

Warning

This post is certainly obsolete...
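
  • a minimal sketch of what this can look like with pip: the --user flag installs packages in your home directory (~/.local on Linux, ~/Library/Python on MacOsX) without touching the system python:

    # install into the per-user site-packages
    pip install --user numpy
    # show where these per-user packages live
    python -c "import site; print site.USER_SITE"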

inverting colors in MacOsX

  • during a presentation, a figure may prove more readable if you invert colors and luminance: black becomes white and vice versa, red becomes cyan and so on.

  • this can be done on the fly using the magical ctrl + opt + command + 8 keyboard shortcut (on a french keyboard, press ! instead of 8).

  • a property linked to universal access (see that reference pane for other tricks)

ubuntu : starting sshd at boot

  • ssh server installed but not starting at boot (I certainly messed up something):

    $ ls -l /etc/init.d/*ssh*
    -rwxr-xr-x 1 root root 3704 2010-09-14 19:20 /etc/init.d/ssh
    $ ls -l /etc/rc2.d/*ssh*
    ls: cannot access /etc/rc2.d/*ssh*: No such file or directory
    $ ls -l /etc/rc1.d/*ssh*
  • a solution is to use update-rc.d:

    usage: update-rc.d [-n] [-f] <basename> remove
           update-rc.d [-n] <basename> defaults [NN | SS KK]
           update-rc.d [-n] <basename> start|stop NN runlvl [runlvl] [...] .
           update-rc.d [-n] <basename> disable|enable [S|2|3|4|5]
                    -n: not really
                    -f: force
    
    The disable|enable API is not stable and might change in the future.
  • by issuing :

    $ sudo update-rc.d ssh defaults
    update-rc.d: warning: ssh stop runlevel arguments (0 1 6) do not match LSB Default-Stop values (none)
     Adding system startup for /etc/init.d/ssh ...
       /etc/rc0.d/K20ssh -> ../init.d/ssh
       /etc/rc1.d/K20ssh -> ../init.d/ssh
       /etc/rc6.d/K20ssh -> ../init.d/ssh
       /etc/rc2.d/S20ssh -> ../init.d/ssh
       /etc/rc3.d/S20ssh -> ../init.d/ssh
       /etc/rc4.d/S20ssh -> ../init.d/ssh
       /etc/rc5.d/S20ssh -> ../init.d/ssh
  • should work now

    $ ls -l /etc/rc1.d/*ssh*
    lrwxrwxrwx 1 root root 13 2011-01-18 21:33 /etc/rc1.d/K20ssh -> ../init.d/ssh
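
  • to check immediately, without rebooting, that the daemon can be started and is running:

    sudo /etc/init.d/ssh start
    pgrep -lf sshd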

how to find stuff

  • the most simple command is locate :

    locate Python.h

    ; it is based on a database updated regularly (most often daily).

  • the most powerful is find :

    # To find all files modified in ~/Sites three days ago:
    find ~/Sites -mtime 3
    # and 10 minutes ago:
    find ~/Sites -mmin 10
    # A time specified by -n means less than, while +n means more than.

    # To find all files in your home directory modified within the last week:
    find ~ -mtime -7
    # To find all files changed (or created) since last-backup.log was:
    find ~ -newer last-backup.log

    # To find files larger than 2 megabytes (4000 of these 512 byte blocks):
    find ~ -size +4000
    # To find empty files:
    find . -empty
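
  • find combines nicely with other commands; for instance (pattern and path are just examples):

    # list python files under ~/Sites modified in the last week that contain TODO
    find ~/Sites -name '*.py' -mtime -7 -exec grep -l TODO {} \;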

Caps Lock, what a useless key

http://mkaz.com/archives/86/disable-caps-lock-on-mac-os-x/

  1. [Ubuntu / gnome] You can disable it with System->Preferences->Keyboard->Layouts->Options...->CapsLock key behavior

  2. [MacosX] Open System Preferences, select the Keyboard pane. Within here, click the Modifier Keys… button at the bottom. To disable the Caps Lock key, pull down the associated menu and select No Action.

ignoring a folder in SVN

  • simply issue

    svn propset svn:ignore '*' data/
  • then, you may change behavior by editing this setting:

    svn propedit svn:ignore  data/
  • then commit; this will apply to all updated working copies
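  • to ignore several patterns at once, you can keep them in a file and load it with -F (the file name is just an example):

    svn propset svn:ignore -F .svnignore data/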

Password-less logins with OpenSSH

Because OpenSSH allows you to run commands on remote systems, showing you the results directly, as well as just logging in to systems, it's ideal for automating common tasks with shell scripts and cronjobs. One thing that you probably won't want to do, though, is store the remote system's password in the script. Instead you'll want to set up SSH so that you can log in securely without having to give a password.

Thankfully this is very straightforward, with the use of public keys.

To enable the remote login you create a pair of keys, one of which you simply append to a file on the remote system. When this is done you'll then be able to log in without being prompted for a password - and this also includes any cronjobs you have set up to run.

If you don't already have a keypair generated you'll first of all need to create one.

To generate a new keypair you run the following command:

skx@lappy:~$ ssh-keygen -t rsa

This will prompt you for a location to save the keys, and a pass-phrase:

Generating public/private rsa key pair.
Enter file in which to save the key (/home/skx/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/skx/.ssh/id_rsa.
Your public key has been saved in /home/skx/.ssh/id_rsa.pub.

If you accept the defaults you'll have a pair of files created, as shown above, with no passphrase. This means that the key files can be used as they are, without being "unlocked" with a password first. If you're wishing to automate things this is what you want.

Now that you have a pair of keyfiles generated, or pre-existing, you need to append the contents of the .pub file to the correct location on the remote server.

Assuming that you wish to login to the machine called mystery from your current host with the id_rsa and id_rsa.pub files you've just generated you should run the following command:

ssh-copy-id -i ~/.ssh/id_rsa.pub username@mystery

This will prompt you for the login password for the host, then copy the keyfile for you, creating the correct directory and fixing the permissions as necessary.

The contents of the keyfile will be appended to the file ~/.ssh/authorized_keys; older OpenSSH setups used ~/.ssh/authorized_keys2 for protocol-2 keys.

Once this has been done you should be able to login remotely, and run commands, without being prompted for a password:

skx@lappy:~$ ssh mystery uptime
 09:52:50 up 96 days, 13:45,  0 users,  load average: 0.00, 0.00, 0.00
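
With the key in place, cronjobs can use it directly. For instance (paths and schedule are hypothetical), to mirror a folder to the remote host every night at 02:00, add a line like this to your crontab (crontab -e):

0 2 * * * rsync -a -e ssh /home/skx/data/ username@mystery:backup/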

Instinct Paradise :- IMERA day of 9 Nov 2010

IMERA day, 9 November 2010

My scientific project focuses on the computational mechanisms that underlie cognition. That is, we know where these mechanisms take place, by describing the central nervous system as a network of neurons connected by synapses, and we know that they are supported by electro-chemical signals between these nodes, but we do not yet fully understand how the information that these signals seem to carry can be interpreted. The "Grail" here is the discovery of the "neural code", that is, of the language used in our brain. We do not know whether this discovery is possible: can there be a global knowledge of the brain, in the way one can predict the trajectory of a planet with Newton's laws? Perhaps the brain itself is not complex enough, even networked with all the neuroscientists in the world, to unravel its own workings... But there are many avenues for discovering it progressively:

  • The discovery of neural algorithms makes it possible to build new computing paradigms. Indeed, one thing we do know about the brain is that it is not a computer! At least it is not a classical (von Neumann) computer, where all the information passes through one (or a small number of) very fast processors. Instead, the architecture of the nervous system is massively parallel, asynchronous and adaptive. These new algorithms could be implemented on the new generation of chips that are currently under development.

  • A better knowledge of these mechanisms naturally opens the way to many therapeutic applications over a wide spectrum, from the control of epilepsy to the understanding of neural degeneration. In the laboratory, we apply this scientific approach by concentrating on the foundations of vision, and in particular on the ability to detect motion.

We are still in the Middle Ages of a global understanding of cognition. There is no elementary building block or universal principle as there can be in other disciplines such as mechanics, chemistry or classical mathematical logic. We are here in the domain of the sciences of complexity: we deal with concepts that are still very young relative to the age of humanity, such as the use of information measures, self-organization or emergence.

As one can see, our scientific approach is relatively broad, and while it is applied to a particular case (motion detection), we make sure that it can always be carried over generically to other problems: other sensory or cognitive modalities, but above all other scales of analysis, from the very small (the interaction of sub-parts of a neuron) to the very large (social interactions).

There is therefore a strong overlap between this field of action and the artistic approach of Etienne Rey and Franck2Louise, which led to the emergence of this project. At first I was surprised by their use of keywords (diffraction, particle, resonance, emergence, ...) and thought these were used more for the poetic power of their evocation. In fact, over the course of our discussions we realized that we were speaking the same language and that a path opens up if we confront our perspectives by redefining what is not yet precisely defined. This is the interest of Tropique for me as a researcher in neuroscience: a space of creation in the implementation of the project and in the definition of the "artificial brain" that will control it, an unpredictable space of creation that will be born from the interaction with the audience.

More prosaically, my interest has several facets:

  • handling the motion information of several actors is a technological feat that will be a trial by fire for the neuro-mimetic algorithms we are developing. In particular, the concept of an elementary particle of motion information will be able to show its usefulness at a practical level,

  • exploring in practice the resonance between Perception and Action. These two facets of cognition, which are engraved in the anatomy of the brain, are inseparable. Instinct Paradise provides an experimental space that lets us directly manipulate a person's perception of space (their "aura") as well as their interactions. In the manner of a fractal, we envisage transposing this level of inter-personal social interactions (10m x 10m) onto a model of neural interactions (1cm x 1cm) built on similar elementary rules of diffusion/aggregation,

  • it is a human adventure, a series of exchanges, a project that we want to share. At my level, it is also about the recognition that comes from it being supported by the institutions. At a time when the only public space for science is the mysticism of lipo-surgered twins or the industrious scepticism of a geologically mammothed ex-minister, it is a real pleasure to be able to set up a project that lets me present some advances in our knowledge of the brain. Finally, my interest is also in being able to share a beer at the bar of la Friche, chatting freely about metaphysical concepts, then diving into a very specific detail of the construction of a detector, or imagining the possible interaction scenarios.

wma to MP3

Warning

This post is certainly obsolete...

  • http://seismic.ocean.dal.ca/~leblanc/pwp_wiki/static/upload/audio_conv.py

  • Mplayer has changed the syntax for pcm (wav) output. The -ao pcm -aofile <filename> option has changed to -ao pcm:file=<filename>, which doesn't like DOS filenames (c:\bla\...), so I'm using a real hack to make a tempfile in the current directory (for Windows only, *nix works normally). This script uses the syntax of the newer version of Mplayer; if you need the older syntax, comment out the current method near line 286 and uncomment the one above it. This should be pretty clear when looking at the code.

./audio_conv.py -h
./audio_conv.py  -i "*.wma" -r --to-mp3 --dry-run
./audio_conv.py  -i "*.wma" -r --to-mp3
./audio_conv.py  -i "*.wma" -r --to-mp3 --normalize --delete

installing Dovecot on MacOsX using MacPorts

Warning

This post is certainly obsolete...

  • master howto: https://trac.macports.org/wiki/howto/SetupDovecot

  • Install

    sudo port install dovecot
    sudo port load dovecot
  • Configure

    sudo cp /opt/local/etc/dovecot/dovecot-example.conf  /opt/local/etc/dovecot/dovecot.conf
    
    sudo vim /opt/local/etc/dovecot/dovecot.conf
  • Mine reads (it's just meant to access imap files from the local mail server and not to serve outside the localhost):

    protocols = imap
    listen = localhost:10143
    disable_plaintext_auth = no
    ssl = no
    mail_location = maildir:~/Maildir
    protocol imap {
    }
    auth default {
      mechanisms = plain
      passdb pam {
        args = login
      }
      userdb passwd {
          args =
      }
      user = root
    }
    dict {
    }
  • Reload

    sudo launchctl stop org.macports.dovecot
    sudo launchctl start org.macports.dovecot
  • It does not work on the first try... so read documentation

    less /opt/local//share/doc/dovecot/documentation.txt
    less /opt/local//share/doc/dovecot/auth-protocol.txt
    less /opt/local//share/doc/dovecot/wiki/PasswordDatabase.PAM.txt
  • Authentification

    ls -l /etc/pam.d/
    sudo vim /etc/pam.d/dovecot

    with /etc/pam.d/dovecot being

    auth       required       pam_permit.so
    account    required       pam_permit.so
    password   required       pam_deny.so
    session    required       pam_uwtmp.so
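
  • A quick way to check that the server answers is to speak IMAP to it by hand (replace user and password with a local account):

    telnet localhost 10143
    # then type, at the IMAP greeting:
    a1 LOGIN user password
    a2 LIST "" "*"
    a3 LOGOUT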

rsync to an alternate ssh port

  • Q: sometimes you try to copy files using rsync but the server uses an alternate port than the usual 22...

  • A: `` rsync -av -e 'ssh -p 2222' HOST:~/folder/* dest ``
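
  • Alternatively, record the port once in ~/.ssh/config so that plain ssh and rsync invocations pick it up (HOST being the name of your server):

    # ~/.ssh/config
    Host HOST
        Port 2222

  • after which `` rsync -av HOST:~/folder/* dest `` works without the -e option.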

Marseille : bookmarks and tips

Warning

This post is certainly obsolete...

installing python and its components

  • Python is often pre-installed on your system or easy to download. More difficult is to get the essential packages (numpy, scipy, matplotlib, ipython) and their dependencies installed. Here, I list some of the possibilities.

on MacOsX: using MacPorts

  • A basic installation procedure is to use the Enthought distribution.

  • Another route is to use MacPorts. It is a generic package manager inspired by what you get using Debian's apt scheme.

  • Once installed, do on the command-line

    • on Leopard:

      sudo port install py25-pil py25-numpy py25-scipy py25-ipython py25-matplotlib +cairo+latex+tkinter
      sudo python_select python25

      (Note: you may also use python26 on Leopard).

    • on Snow Leopard:

      sudo port install py26-numpy py26-scipy py26-ipython py26-matplotlib
      sudo port install py26-pyobjc2-cocoa py26-pil py26-distribute py26-pip py26-py2app python_select
      sudo port install vtk5 +carbon +qt4_mac +python26 py26-mayavi
      
      sudo python_select python26

      to install a bunch of useful python packages.

    • to get a package that is not available through macports, do:

      sudo easy_install progressbar
  • for visionEgg :

    sudo port install py26-opengl py26-game
    sudo easy_install visionegg
  • http://ipython.scipy.org/moin/Py4Science/InstallationOSX

  • on Snow Leopard, you'll have to follow these instructions.

Windows

Debian / Ubuntu

DistUtils, PIP & Easy Install

  • most of the time, there's a setup.py file:

    python setup.py install --prefix=~
  • See http://peak.telecommunity.com/DevCenter/EasyInstall

  • to install numpy (same for pylab, scipy, or visionegg), simply do

    easy_install numpy
  • in most cases, on a test server or a single-user machine, you may find it more useful to install in your home directory, for instance:

    easy_install -d ~/lib/python2.5/site-packages/ numpy
  • to upgrade, use

    easy_install -U numpy
  • you can browse the list of available packages.

  • for pip: http://pip.openplans.org/

  • you may create a script to update all packages:

    for i in `python -c "for dist in __import__('pkg_resources').working_set: print dist.project_name"`
    do
    echo "`easy_install -U $i`"
    echo "++++++++++++++++++++++++++++++++++++++++++++++++++"
    done
  • to install PIL, use

    easy_install -d lib/python2.6/site-packages/ --find-links http://www.pythonware.com/products/pil/ Imaging

SVNs: bleeding edge versions

  • numpy

    svn co http://svn.scipy.org/svn/numpy/trunk numpy
    cd numpy
    python setup.py build
    sudo python setup.py install
    rm -rf build
    cd ..
  • SciPy

    svn co http://svn.scipy.org/svn/scipy/trunk scipy
    cd scipy
    python setup.py build
    sudo python setup.py install
    rm -rf build
    cd ..
  • pylab

    svn co https://svn.sourceforge.net/svnroot/matplotlib/trunk/matplotlib matplotlib
    cd matplotlib
    python setup.py build
    sudo python setup.py install
    sudo rm -rf build
    cd ..
  • SPE

    svn checkout svn://svn.berlios.de/python/spe/trunk/_spe
  • PIL

    wget http://effbot.org/downloads/Imaging-1.1.6.tar.gz
    tar zxvf  Imaging-1.1.6.tar.gz
    cd Imaging-1.1.6
    python setup.py build_ext -i
    python selftest.py
    python setup.py install
  • gsl

    cvs -d :pserver:anoncvs@sources.redhat.com:/cvs/gsl login
    cvs -d :pserver:anoncvs@sources.redhat.com:/cvs/gsl checkout gsl
    cd gsl/
    ./autogen.sh
    ./configure --enable-maintainer-mode
    make
  • pytables

    • dependency on HDF

      wget ftp://ftp.hdfgroup.org/HDF5/current/src/hdf5-1.6.5.tar.gz
      tar zxvf hdf5-1.6.5.tar.gz
      cd hdf5-1.6.5
      ./configure --enable-cxx
      make
      make install
      h5ls -r  Documents/Sci/projets/virtualV1/experiments/benchmark_one/results/benchmark_retina_high.h5
      wget http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/Pyrex-0.9.5.1a.tar.gz
      tar zxvf Pyrex-0.9.5.1a.tar.gz
      cd Pyrex-0.9.5.1a
      python setup.py build
      sudo python setup.py install
      rm -rf build
    • install

      wget http://puzzle.dl.sourceforge.net/sourceforge/pytables/pytables-1.4.tar.gz
      #svn co http://pytables.org/svn/pytables/trunk/ pytables
      tar zxvf pytables-1.4.tar.gz
      cd pytables-1.4
      export DYLD_LIBRARY_PATH=/sw/lib # or in .bashrc
      python setup.py install --hdf5=/sw
      cd ..
  • pygtk

    wget http://ftp.gnome.org/pub/GNOME/sources/pygtk/2.8/pygtk-2.8.6.tar.bz2
    tar xvfj pygtk-2.8.6.tar.bz2
    cd pygtk-2.8.6
    ./configure
    make
    sudo make install    # or without sudo as root
    cd ..

Master M2 Sciences

Computational Neuroscience: emergence in information networks

Practical session: a Bayesian model for detecting the motion of objects

  • On 27/10/2010, from 14:00 to 17:00, in the training room of the INCM (building N of the GLM, 31 chemin Joseph Aiguier, 13402 Marseille cedex).

  • let me know if you do not have a personal laptop!

outline of the practical session

  • Goal: define the likelihood probability of translations of images, using a generic script (see the numpy sketch after this outline)

  • Method: set up the experiments:

    1. a simple experiment

    2. an experiment with speeds between 0 and one period

    3. the effect of contrast on the wagon-wheel illusion

  • Results:

    1. a figure showing 2 successive frames of a movie and the probability density of motion. The images will be: a dot, a line (grating), a barber pole, a natural image.

    2. a figure showing the influence of noise added to these images

    3. a figure showing the influence of speed on the motion of a grating
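
  • as a rough numpy sketch of what such a likelihood computation may look like (the motion_plant toolbox used below is not shown here, so this shift-and-compare model is an assumption, not its actual code; V is assumed to be a list of (v_x, v_y) candidate translations):

    import numpy as np

    def proba_sketch(V, I1, I2, sigma=0.1, sigma_p=1.):
        # P(V | I1, I2) ~ exp(-||I2 - shift(I1, V)||^2 / (2 sigma^2)) * prior(V)
        P = np.zeros(len(V))
        for i, (v_x, v_y) in enumerate(V):
            I1_shifted = np.roll(np.roll(I1, int(round(v_x)), axis=1),
                                 int(round(v_y)), axis=0)
            error = np.mean((I2 - I1_shifted)**2)              # matching error
            prior = np.exp(-(v_x**2 + v_y**2) / (2 * sigma_p**2))
            P[i] = np.exp(-error / (2 * sigma**2)) * prior
        return P / P.sum()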

prerequisites

simple experiment

  • to load the toolbox:

    import motion_plant as mp
  • to display an image

    I1, I2 = mp.generate(V_X = 3.5)
    mp.show_images(I1,I2)
  • Compute the probability over the velocities V for the 2 images:

    1. First define V with the velocity_grid function, for instance:

      V= mp.velocity_grid(v_max = 5.)

      (v_max is important: it sets the maximum velocity over which the probabilities are tested)

    2. then call:

      P= mp.proba(V, I1, I2)
  • you can display the probability with

    mp.show_proba(V, P)
  • and find the velocity corresponding to its maximum:

    mp.ArgMaxProba(V, P)

    or its mean

    mp.MeanProba(V, P)
  • try different images...

    1. natural image

      I1, I2 = mp.lena()
    2. square-wave grating

      I1, I2 = mp.generate(V_X = 2.5, square=True)
  • and different parameters, such as the variance of the likelihood

    sigmas = np.logspace(-2, 0, 10)
    
    for sigma in sigmas:
        I1, I2 = mp.generate(V_X = 2.5)
        P= mp.proba(V, I1, I2, sigma=sigma)
        mp.show_proba(V, P)
        pylab.title('sigma ' + str(sigma))
        print mp.ArgMaxProba(V, P), mp.MeanProba(V, P)
  • or the variance of the prior

    sigmas = np.logspace(-1, 1, 10)
    
    for sigma_p in sigmas:
        I1, I2 = mp.generate(V_X = 2.5)
        P= mp.proba(V, I1, I2, sigma_p=sigma_p)
        mp.show_proba(V, P)
        pylab.title('sigma_p ' + str(sigma_p))
        print mp.ArgMaxProba(V, P), mp.MeanProba(V, P)

experiment with different noise levels

  • to create a list of noise levels to test, use

    noises = np.linspace(0, 2., 10)
    
    for noise in noises:
        I1, I2 = mp.generate(V_X = 2.5, noise=noise)
        mp.show_images(I1,I2)
        P= mp.proba(V, I1, I2, sigma_p=1.)
        print mp.ArgMaxProba(V, P), mp.MeanProba(V, P)
  • to be compared with the case where the prior is more conservative:

    pylab.close('all')
    N_contrast =10
    contrasts = np.linspace(0, 1., N_contrast)
    V_hat = np.zeros((N_contrast,2))
    for i, contrast in enumerate(contrasts):
        I1, I2 = mp.generate(V_X = 2.5, contrast=contrast, noise=.2,)
        P= mp.proba(V, I1, I2, sigma_p=10.)
        V_hat[i,:] = mp.MeanProba(V, P)
    
    pylab.plot(contrasts, V_hat[:,0], 'r')
    pylab.plot(contrasts, V_hat[:,1], 'r--')
    
    V_hat = np.zeros((N_contrast,2))
    for i, contrast in enumerate(contrasts):
        I1, I2 = mp.generate(V_X = 2.5, contrast=contrast, noise=.2,)
        P= mp.proba(V, I1, I2, sigma_p=1.)
        V_hat[i,:] = mp.MeanProba(V, P)
    
    pylab.plot(contrasts, V_hat[:,0], 'g')
    pylab.plot(contrasts, V_hat[:,1], 'g--')
  • or with a natural image:

    pylab.close('all')
    N_contrast =10
    contrasts = np.linspace(0, 1., N_contrast)
    V_hat = np.zeros((N_contrast,2))
    for i, contrast in enumerate(contrasts):
        I1, I2 = mp.lena()
        P= mp.proba(V, I1, I2, sigma_p=10.)
        V_hat[i,:] = mp.MeanProba(V, P)
    
    pylab.plot(contrasts, V_hat[:,0], 'b')
    pylab.plot(contrasts, V_hat[:,1], 'b--')

experiment with a grating at different speeds

  • to create a list of speeds to test, use

    speeds = np.linspace(0, 10., 10)
    V_hat = np.zeros((10,2))
    for i, V_X in enumerate(speeds):
        I1, I2 = mp.generate(V_X = V_X, frequence=12)
        P= mp.proba(V, I1, I2, sigma_p=1.)
        V_hat[i,:] = mp.ArgMaxProba(V, P)
    
    pylab.plot(speeds, V_hat[:,0], 'g')
    pylab.plot(speeds, V_hat[:,1], 'g--')
  • ... this is the wagon-wheel effect!

references

nous

  • http://en.wikipedia.org/wiki/Nous

  • Nous (pronounced /ˈnuːs/, Greek: νοῦς or νόος) is a philosophical term for mind or intellect. Outside of a philosophical context, it is used, in colloquial English, to denote "common sense," with a different pronunciation (/naʊs/), and sometimes a different spelling (nouse or nowse).

Emergence

http://upload.wikimedia.org/wikipedia/commons/2/2d/Automate_cellulaire_hexagonal.png

  • special feature in La Recherche, February 2007: an article by the philosopher Michel Bitbol + an interview with Robert Laughlin + an article of examples + a historical timeline

  • a very comprehensive article by Bitbol:

    1. Is nature a bottomless pit?

    2. emergence is opposed to atomistic reductionism

    3. it answers the atomist argument ("if the whole is more than the sum of its parts, then these parts will be the new atoms"): "the emergent property is autonomous from the parts that compose it" (it is multiply realizable, or supervenient). The same organizations are observed at different scales of physics: crystals, the brain, the internet.

    4. hence the need to show the robustness of the emergent law against perturbations of the substrate: a protection principle.

    5. the phenomenon pre-exists the law, which is only there to describe it (at best), as for http://fr.wikipedia.org/wiki/Renormalisation

    6. it places this in the political context of physics (nuclear atomists vs. emergentists of condensed matter)

    7. "The 'bottomless' option has an advantage. With it, the question of the foundation is no longer posed in terms of existence but in terms of method." It is hard to move away from the centre of the world where man has placed himself. Intelligence is only one of these emergent properties, and no longer the "divine eye" at the centre of the dance of the elements.

http://i.ytimg.com/vi/HuwXJlPvkhc/0.jpg

  1. Laughlin's article, with less content.

  2. examples

    1. collective behaviours

    2. the Langevin equation (1908) is a stochastic equation for Brownian motion.

    3. small-world networks

  3. historical timeline

    1. Anaxagoras against the idea of the void

    2. Leibniz: each monad (part) reflects the whole

    3. G. Bruno = an infinite universe; burned by the Church (which may shed some light on the Pope's recent sermon on atheists)

    4. Lewes: origin of the word "emergence"

    5. R. M. Hare: supervenience = a property supervenes on another if variations in the first imply differences in the second, but not necessarily the other way round (a definition of causality?)

  4. notes:

    1. we always speak of objects vs. the actions on these objects: isn't this an anthropomorphic view of homo faber?

    2. and what about the "positivist contract"? http://fr.wikipedia.org/wiki/Positivisme "Positive philosophy is the whole of human knowledge, arranged in a certain order that makes it possible to grasp its connections and its unity, and to draw from it the general directions for each part as well as for the whole. It differs from theological philosophy and from metaphysical philosophy in that it is of the same nature as the sciences from which it proceeds, whereas theology and metaphysics are of another nature and can neither guide the sciences nor be guided by them; the sciences, theology and metaphysics have no common nature between them. This common nature exists only between positive philosophy and the sciences. But how shall we define human knowledge? We shall define it as the study of the forces that belong to matter, and of the conditions or laws that govern these forces. We know only matter and its forces or properties; we know neither matter without properties nor properties without matter. When we have discovered a general fact in one of these forces or properties, we say that we are in possession of a law, and this law immediately becomes for us a mental power and a material power; a mental power, because in the mind it becomes an instrument of logic; a material power, because in our hands it becomes a means of directing natural forces." (Émile Littré, Auguste Comte et la philosophie positive)

bundling using py2app

Warning

This post is certainly obsolete...

using macports

  • install py2app :

    sudo port install -u  py26-py2app
  • py2app sometimes fails to pick the right architecture to build for; to check, locate and open its apptemplate/setup.py:

    find /opt/local -name apptemplate/setup.py
    sudo vim /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/py2app/apptemplate/setup.py
  • in that file, the gPreBuildVariants list originally contains entries like the following:

    gPreBuildVariants = [
        ...
        {
            'name': 'main-x86_64',
            'target': '10.6',
            'cflags': '-isysroot /Developer/SDKs/MacOSX10.6.sdk -arch x86_64',
            'cc': 'gcc-4.2',
        },
        {
            'name': 'main-i386',
            'target': '10.6',
            'cflags': '-isysroot / -arch i386',
            'cc': 'gcc-4.2',
        },
        ...
    ]

    So, change it to:

    gPreBuildVariants = [
        {
            'name': 'main-x86_64',
            'target': '10.5',
            'cflags': '-isysroot /Developer/SDKs/MacOSX10.5.sdk -arch x86_64',
            'cc': 'gcc-4.2',
         },
    #     {
    #         'name': 'main-universal',
    #         'target': '10.5',
    #         'cflags': '-isysroot /Developer/SDKs/MacOSX10.5.sdk -arch i386 -arch ppc -arch ppc64 -arch x86_64',
    #         'cc': 'gcc-4.2',
    #     },
    #     {
    #         'name': 'main-fat3',
    #         'target': '10.5',
    #         'cflags': '-isysroot / -arch i386 -arch ppc -arch x86_64',
    #         'cc': 'gcc-4.2',
    #     },
    #     {
    #         'name': 'main-intel',
    #         'target': '10.5',
    #         'cflags': '-isysroot / -arch i386 -arch x86_64',
    #         'cc': 'gcc-4.2',
    #     },
    #     {
    #         'name': 'main-fat',
    #         'target': '10.3',
    #         'cflags': '-isysroot /Developer/SDKs/MacOSX10.4u.sdk -arch i386 -arch ppc',
    #         'cc': 'gcc-4.0',
    #     },
    ]
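
  • for reference, a minimal setup.py to actually build a bundle with py2app could look like the following sketch (main.py and the options are placeholders to adapt to your application):

    # build with: python setup.py py2app
    from setuptools import setup

    setup(
        app=['main.py'],                                   # placeholder entry point
        options=dict(py2app=dict(argv_emulation=True)),
        setup_requires=['py2app'],
    )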

using homebrew

A neurocentric approach to Bayesian inference

  • a one-page paper arguing that Friston's free-energy view may not be complete. Some points made are:

    1. the inversion performed assumes a generative model

    2. surprise is defined using a frequentist rather than an information-theoretic approach

      • one idea: from the frequentist measure one can derive a conditional probability (a chi-squared distribution) of the probability. Not very far from the idea of Sahani & Dayan of a double probabilistic distribution

    3. explore surprise or avoid it: Fiorillo here confuses time scales. On the long term (learning), one tends to avoid surprise; on the short term (coding), this implies that one jumps on surprise.

    4. points to his PLoS ONE paper: Fiorillo, C. D. Towards a general theory of neural computation based on prediction by single neurons. PLoS ONE 3, e3298 (2008)

  • once again, people love bipolarity: frequentists against probabilists, top-down vs. bottom-up, neurocentric vs. global. Neurons, areas, brains and groups of brains just don't care and evolve; it is our description that can be multiple. Does a single one (a "unified theory") exist in today's language? At least I am convinced that (over generations) neurons adapt to behavior, not the inverse, and thus that if one has to seek an information measure, it is certainly not in an ion channel's dynamics alone.

  • Friston's answer goes in that direction / correctly defines surprise / nice figure showing how one can learn "to be a Lorenz attractor" (certainly assuming a generative model of the dynamics)

  • the open question is rather "how is the free-energy principle encoded in the brain's architecture and dynamics?"

reference

  • Christopher D. Fiorillo. A neurocentric approach to Bayesian inference. Nature Reviews Neuroscience, 11(8):605, 2010.

    A primary function of the brain is to infer the state of the world in order to determine which motor behaviours will best promote adaptive fitness. Bayesian probability theory formally describes how rational inferences ought to be made, and it has been used with great success in recent years to explain a range of perceptual and sensorimotor phenomena [1-5].

  • Karl Friston. Is the free-energy principle neurocentric? Nature Reviews Neuroscience, 11(8):605, 2010.

    Recently, a free-energy formulation of brain function was reviewed in relation to several other neurobiological theories (The free-energy principle: a unified brain theory? Nature Rev. Neurosci. 11, 127–138 (2010)).

managing packages on MacOsX : testing HomeBrew

  • install

     $ ruby -e "$(curl -fsS http://gist.github.com/raw/323731/install_homebrew.rb)"
    ==> This script will install:
    /usr/local/bin/brew
    /usr/local/Library/Formula/...
    /usr/local/Library/Homebrew/...
    
    Press enter to continue
    ==> Downloading and Installing Homebrew...
    ==> Installation successful!
  • fix permissions

    $ sudo chown -R `whoami` /usr/local
  • to install python specific stuff, use pip:

    brew install pip
    echo '[install]
    install-scripts=/usr/local/Cellar/PyPi/2.6/bin
    install-data=/usr/local/Cellar/PyPi/2.6/share' > ~/.pydistutils.cfg
    pip install ipython
  • this works with the exception of numpy + scipy, which need:

    cd tmp
    svn co http://svn.scipy.org/svn/numpy/trunk numpy
    pip install numpy
    
    brew install suite-sparse
    svn co http://svn.scipy.org/svn/scipy/trunk scipy
    pip install scipy

Terminal.app shortcuts

  • from http://superuser.com/questions/52483/terminal-tips-and-tricks-for-mac-os-x

    To make Ctrl← and Ctrl→  useful again, that is going a word forward or backward like they usually do on Linux, you must make Terminal.app send the right string to the shell. In the preferences, go to the Settings tab and select your default profile. Go to Keyboard and set control cursor left and control cursor right to send string \033b and \033f respectively.
    
    While you're at it, you can also fix Home (\033[H), End (\033[F), Page Up (\033[5~) and Page Down (\033[6~) so that they send those keys to the shell instead of scrolling the buffer.

System Updates

  • install a package

    sudo installer -pkg <path to the package> -target /
  • CLI for Software Updates :

    sudo softwareupdate -i -a

getting the PID from matlab

  • I need the PID to know whether one of the many simulations I run is still running. There is no native solution in MATLAB to my knowledge.
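
  • on the python side (for scripts such as the experiments below), one possible workaround is to record each run's PID in a file and test later whether that process is still alive; a minimal sketch with the standard library (the lock file name is a placeholder):

    import os, errno

    def record_pid(lockfile='experiment.pid'):
        # write the current process id so another script can check on it later
        open(lockfile, 'w').write(str(os.getpid()))

    def is_running(pid):
        # signal 0 does not kill the process, it only checks that it exists
        try:
            os.kill(pid, 0)
        except OSError, e:
            return e.errno == errno.EPERM    # alive, but owned by another user
        return True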

Warning

This post is certainly obsolete...

running embarrassingly parallel simulations on a multicore machine using bash loops

  • I need to run a single-processor experiment on some parameters, say N times

  • embarrassingly parallel: python experiment_all.py scans all these parameters (a file-locking sketch is given after this list):

    for i in range(N):
        if experiment[i] is not finished and not locked:
            lock(experiment[i])
            run(experiment[i])
  • to run this on 8 cores, bash is your friend (may also apply to *sh where * is either z, c, tc, ...)

    for i in {1..8}; do cd /data/work/ && python experiment_all.py  & done
  • however, running them simultaneously may cause problems if the locking mechanism is not fast enough, so I introduce a random jitter

    for i in {1..8}; do cd /data/work/ && sleep 0.$(( RANDOM%1000 )) ; python experiment_all.py  & done
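
  • the locking step of the pseudo-code above can be made atomic (so that two of the concurrent processes cannot grab the same experiment) by creating the lock file with O_CREAT | O_EXCL; a minimal sketch, with placeholder experiment names:

    import os

    def try_lock(name):
        # O_CREAT | O_EXCL makes the creation atomic: only one process can win
        try:
            fd = os.open(name + '.lock', os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        except OSError:
            return False                     # someone else locked this experiment
        os.write(fd, str(os.getpid()))
        os.close(fd)
        return True

    for name in ['experiment_A', 'experiment_B']:          # placeholder list
        if os.path.exists(name + '.npy'):
            continue                         # already finished
        if try_lock(name):
            print 'running', name            # run(name) would go here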

distributed computing

  • suppose you have a bunch (4000) of embarrassingly parallel tasks (one hour each) and access to about 40 CPUs through SSH. Every task would run easily on each node, and they all share a network drive (NFS). It would be nice to run everything from just one place (script, command line, web interface, ...)
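
  • a minimal way to do this from a single python script is to push the tasks to the nodes over ssh with subprocess (a sketch only: the host names, task command and working directory are placeholders, and results are assumed to be written by the tasks to the shared NFS drive):

    import subprocess

    hosts = ['node01', 'node02', 'node03']                 # placeholder node names
    tasks = ['python experiment_all.py'] * 6               # placeholder task list

    # launch the tasks round-robin over the hosts, then wait for the whole batch
    jobs = []
    for i, task in enumerate(tasks):
        cmd = ['ssh', hosts[i % len(hosts)], 'cd /data/work && ' + task]
        jobs.append(subprocess.Popen(cmd))
    for job in jobs:
        job.wait()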

a bunch of existing tools

what we can do

Pinna illusion

  • from http://www.scholarpedia.org/article/Pinna_illusion :

    Pinna illusion is the first visual illusion showing a rotating motion effect. In Figure 1  the squares, delineated by two white and two black edges each, are grouped by proximity in two concentric rings. All the squares have the same width, length, and orientation in relation to the center of their circular arrangements. The two rings differ only in the relative position of their narrow black and white edges forming the vertexes. More precisely, the two rings show reversal of the vertex orientation and, consequently, opposite inclination of the virtual or implicit diagonal orientation polarity obtained by joining the two vertexes where black and white lines meet (Pinna, 1990; Pinna & Brelstaff, 2000).
  • related to the aperture problem

    The Pinna illusion and the related effects represent an opportunity within the context of vision science and cognitive neuroscience  (Gazzaniga, 2004; Purves & Lotto, 2003). If the task of a sensory system is to provide a faithful representation of biologically relevant events in the external world, the previous phenomena show that visual perception  contrives, through complex neural computations, to create informative and efficient representations of the external environment. These representations are at the same time simpler and richer than the raw signals transduced by receptors. They are simpler because they simplify the enormous quantity of raw measurement information submitted to the central nervous system (see Section 2). They are richer because they contain properties of events and objects abstracted from the primitive sensory signals (see Sections 3 and 4). Therefore, the first opportunity suggested by the previous effects concerns the basic encoding of the features of the stimuli, i.e. the nature and meanings of the signals carried by single neurons, the maps and areas where they operate (see Section 2) and the pattern of motion of objects, surfaces, and edges in a visual scene due to the relative motion between an observer and the scene (optical flow, Gibson, 1979). Furthermore, they are good tests to understand the perceptual context within which a specific element is perceived, namely “what is ‘figure and what is ‘background”, “how separated elements of a visual event are combined and organized in a sensory representation” (see Section 4).
  • windmill illusion; a link to the wagon-wheel illusion?

compiling OpenCV on MacOSX 10.6

using macports

  • it works now with macports:

    sudo port install -u opencv +python26 +tbb

Warning

This post is certainly obsolete...

latest SVN

  • compiling here along with MacTex...

  • from http://opencv.willowgarage.com/wiki/Mac_OS_X_OpenCV_Port

    svn co https://code.ros.org/svn/opencv/trunk/opencv
    cd opencv # the directory containing INSTALL, CMakeLists.txt etc.
    mkdir build
    cd build
    cmake -D CMAKE_OSX_ARCHITECTURES=x86_64 -D WITH_FFMPEG=ON -D BUILD_EXAMPLES=ON -D BUILD_LATEX_DOCS=ON -D PDFLATEX_COMPILER=/usr/texbin/pdflatex -D BUILD_NEW_PYTHON_SUPPORT=ON  -D PYTHON_LIBRARY=/opt/local/lib/libpython2.6.dylib -D PYTHON_INCLUDE_DIR=/opt/local/Library/Frameworks/Python.framework/Headers ..
    make -j4
    sudo make install
  • I had to rebuild some ports

    sudo port install ilmbase
    port provides /opt/local/lib/libIlmImf.dylib
    sudo port install openexr
    sudo port install libdc1394

    and recompile

  • then I could run

    cd ../samples/python/
    python camera.py
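
  • beyond the bundled samples, a quick sanity check of the python bindings could be the following sketch (it uses the old-style cv module built above; the camera index 0 is an assumption):

    import cv   # old-style OpenCV python bindings

    capture = cv.CaptureFromCAM(0)           # 0 = default camera (assumption)
    frame = cv.QueryFrame(capture)
    if frame is not None:
        print frame.width, frame.height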

using homebrew

  • another route is homebrew: http://gist.github.com/519418 :

    $ brew info opencv
    opencv 2.1.1-pre
    http://opencv.willowgarage.com/wiki/
    Depends on: cmake, pkg-config, libtiff, jasper, tbb
    /usr/local/Cellar/opencv/2.1.1-pre (96 files, 37M)
    
    The OpenCV Python module will not work until you edit your PYTHONPATH like so:
      export PYTHONPATH="/usr/local/lib/python2.6/site-packages/:$PYTHONPATH"
    
    To make this permanent, put it in your shell's profile (e.g. ~/.profile).
    
    http://github.com/mxcl/homebrew/commits/master/Library/Formula/opencv.rb

latex within moinmoin

  • installation following http://johannes.sipsolutions.net/Projects/new-moinmoin-latex

  • to adapt to my pdflatex distribution, I changed

    # last arg must have %s in it!
    latex_args = ("--interaction=nonstopmode -output-format dvi", "%s.tex")

    in the parser (sudo open -e ~/WebSites/moin/data/plugin/parser/latex.py)

Warning

This post is certainly obsolete...

examples

This is a red square:

\usepackage{graphics,color}

%%end-prologue%%
\newsavebox{\mysquare}
\savebox{\mysquare}{\textcolor{red}{\rule{1in}{1in} } }
\usebox{\mysquare}

symbols

% Math-mode symbol & verbatim
\def\W#1#2{$#1{#2}$ &\tt\string#1\string{#2\string}}
\def\X#1{$#1$ &\tt\string#1}
\def\Y#1{$\big#1$ &\tt\string#1}
\def\Z#1{\tt\string#1}

% A non-floating table environment.
\makeatletter
\renewenvironment{table}%
   {\vskip\intextsep\parskip\z@
    \vbox\bgroup\centering\def\@captype{table}}%
   {\egroup\vskip\intextsep}
\makeatother

% All the tables are \label'ed in case this document ever gets some
% explanatory text written, however there are no \refs as yet. To save
% LaTeX-ing the file twice we go:
\renewcommand{\label}[1]{}

%%end-prologue%%
\begin{table}
\begin{tabular}{*8l}
\X\alpha        &\X\theta       &\X o           &\X\tau         \\
\X\beta         &\X\vartheta    &\X\pi          &\X\upsilon     \\
\X\gamma        &\X\gamma       &\X\varpi       &\X\phi         \\
\X\delta        &\X\kappa       &\X\rho         &\X\varphi      \\
\X\epsilon      &\X\lambda      &\X\varrho      &\X\chi         \\
\X\varepsilon   &\X\mu          &\X\sigma       &\X\psi         \\
\X\zeta         &\X\nu          &\X\varsigma    &\X\omega       \\
\X\eta          &\X\xi                                          \\
                                                                \\
\X\Gamma        &\X\Lambda      &\X\Sigma       &\X\Psi         \\
\X\Delta        &\X\Xi          &\X\Upsilon     &\X\Omega       \\
\X\Theta        &\X\Pi          &\X\Phi
\end{tabular}
\caption{Greek Letters}\label{greek}
\end{table}

or

\begin{equation}
x^3 =\int_{0}^{\infty} f(x,y) dy
\end{equation}
  • and also

    $$x^3 =\int_{0}^{\infty} f(x,y) dy + c$$

inline

Because people requested an easier way to enter latex, I've added the possibility to write $ ... $ to obtain inline formulas. This is equivalent to writing \$ ... \$ and has the same single-line limitation (but everything else isn't really useful in formulas anyway). In order to do this, install the inline_latex.py parser and add #format inline_latex to your page (alternatively, configure the default parser to be inline_latex). This parser accepts all regular wiki syntax, but additionally the $ ... $ syntax. Additionally, the inline_latex formatter supports $$ ... $$ style formulas (still limited to a single line though!) which put the formula into a paragraph of their own.

Note: in the nikola blog, this is directly accomplished by using ReST: writing $\lambda$ produces the rendered formula for lambda.

installing SUMATRA

Warning

This post is certainly obsolete...

dependencies

  • pysvn :

    • had to uninstall stuff from MacPorts

      sudo port uninstall --follow-dependents subversion
    • get pysvn

      • make :

        cd Source
        python setup.py backport
        python setup.py configure   # creates the Makefile
        make
    • install

      sudo rsync -av pysvn /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/
      • pysvn 1.7.1 worked for me

  • mercurial

    sudo easy_install mercurial
  • django

    sudo easy_install django django_tagging

with hg

svn export ../sci/dyva/Motion/particles hg_particles
cd hg_particles/
hg init
hg add MotionParticles.py experiment_all.py
hg commit
hg commit -m 'test'
echo $USER
vim .hgrc
vim ~/.hgrc
hg commit -m 'my first HG commit'
vim ~/.hgrc
ipython
ls
smt init sumatraTest_hg
smt info

with svn

cd sci/dyva/Motion/particles/
smt init -h
smt init sumatraTest
smt info
smt configure --simulator=python --main=experiment_all.py
smt info
smtweb &
ls -a
rm -fr .smt
smt init sumatraTest
smtweb &
open experiment_all.py
touch fake.param
smt run
smt run -s python -m experiment_dot.py fake.param
smt info
smt configure -h
smt configure -c diff
smt info
smt run -s python -m experiment_dot.py fake.param
smt run -s python -m experiment_dot.py fake.param
rm mat/dot.npy
python experiment_dot.py fake.param
ls
smt help configure
smt configure -d ./figures/
smt info
smt configure -s python -m experiment_dot.py
smt run fake.param
rm mat/dot.npy
smt run fake.param
ls figures/
rm figures/dot_*
smt run fake.param
smt info
smt configure -d ./figures
smt info
rm figures/dot_*png
smt configure -d ./figures
smt run fake.param
smt comment "apparently, it is worth NaN shekels."
smt tag codejam
rm figures/dot_*png
rm mat/dot.npy
smt run --reason="test effect of a bigger dot" fake.param dot_size=0.1
ls
ls -al .smt/
less .smt/simulation_records
sqlite3 .smt/simulation_records

NeuroCompMarseille 2010 Workshop

Computational Neuroscience: From Representations to Behavior

Second NeuroComp Marseille Workshop

Date

27-28 May 2010

Location

Amphithéâtre Charve at the Saint-Charles University campus - Métro: lines 1 and 2 (St Charles), a 5-minute walk from the railway station. Maps (Amphithéâtre Charve, University main entrance, etc.); Metro, bus and tramway; getting to Marseille from the airport.

Registration

Registration was free but mandatory, participation limited to 80 persons.

Computational Neuroscience is now emerging as a major approach to exploring cognitive functions. It brings together theoretical tools that elucidate the fundamental mechanisms responsible for experimentally observed behaviour in the applied neurosciences. This is the second Computational Neuroscience Workshop organized by the "NeuroComp Marseille" network.

It will focus on the latest advances in the understanding of how information may be represented in neural activity (1st day) and on computational models of learning, decision-making and motor control (2nd day). The workshop will bring together leading researchers in these areas of theoretical neuroscience. The meeting will consist of invited speakers with sufficient time to discuss and share ideas and data. All conferences will be in English.

  • 27 May 2010 Neural representations for sensory information & the structure-function relation

In this talk, I will review recent work on sparse representations of natural images. I will in particular focus both on the application of these emerging models to image processing problems and on their potential implications for the modeling of visual processing. Natural images exhibit a wide range of geometric regularities, such as curvilinear edges and oscillating textures. Adaptive image representations select bases from a dictionary of orthogonal or redundant frames that are parameterized by the geometry of the image. If the geometry is well estimated, the image is sparsely represented by only a few atoms in this dictionary. On an engineering level, these methods can be used to enhance the resolution in super-resolution inverse problems, and can also be used to perform texture synthesis. On a biological level, these mathematical representations share similarities with low-level grouping processes that operate in areas V1 and V2 of the visual brain. We believe the processing and the biological applications of geometrical methods work hand in hand to design and analyze new cortical imaging methods.

  • 11h00-12h00 Jean Petitot, Centre d'Analyse et de Mathématique Sociales, Ecole des Hautes Etudes en Sciences Sociales - Paris «Neurogeometry of visual perception»

In relation with experimental data, we propose a geometric model of the functional architecture of the primary visual cortex (V1) explaining contour integration. The aim is to better understand the type of geometry algorithms implemented by this functional architecture. The contact structure of the 1-jet space of the curves in the plane, with its generalization to the roto-translation group, symplectifications, and sub-Riemannian geometry, are all neurophysiologically realized by long-range horizontal connections. Virtual structures, such as illusory contours of the Kanizsa type, can then be explained by this model.

  • 14h00-14h45 Peggy Series Institute for Adaptive and Neural Computation, Edinburgh «Bayesian Priors in Perception and Decision Making»

We'll present two recent projects:

The first project (with M. Chalk and A. R. Seitz) is an experimental investigation of the influence of expectations on the perception of simple stimuli. Using a simple task involving estimation and detection of motion random dots displays, we examined whether expectations can be developed quickly and implicitly and how they affect perception. We find that expectations lead to attractive biases such that stimuli appear as being more similar to the expected one than they really are, as well as visual hallucinations in the absence of a stimulus. We discuss our findings in terms of Bayesian Inference.

In the second project (with A. Kalra and Q. Huys), we explore the concepts of optimism and pessimism in decision making. Optimism is usually assessed using questionnaires, such as the LOT-R. Here, using a very simple behavioral task, we show that optimism can be described in terms of a prior on expected future rewards. We examine the correlation between the shape of this prior for individual subjects and their scores on questionnaires, as well as with other measures of personality traits.

  • 14h45-15h45 Heiko Neumann (in collaboration with Florian Raudies) Inst. of Neural Information Processing, Ulm University Germany «Cortical mechanisms of transparent motion perception – a neural model»

Transparent motion is perceived when multiple motions different in directions and/or speeds are presented in the same part of visual space. In perceptual experiments the conditions have been studied under which motion transparency occurs. An upper limit in the number of perceived transparent layers has been investigated psychophysically. Attentional signals can improve the perception of a single motion amongst several motions. While criteria for the occurrence of transparent motion have been identified only few potential neural mechanisms have been discussed so far to explain the conditions and mechanisms for segregating multiple motions. A neurodynamical model is presented which builds upon a previously developed neural architecture emphasizing the role of feedforward cascade processing and feedback from higher to earlier stages for selective feature enhancement and tuning. Results of computational experiments are consistent with findings from physiology and psychophysics. Finally, the model is demonstrated to cope with realistic data from computer vision benchmark databases. Work supported by European Union (project SEARISE), BMBF, and CELEST

  • 16h00-17h00 Rudolf Friedrich, Institut für Theoretische Physik, Westfälische Wilhelms-Universität Münster «Windows to Complexity: Disentangling Trends and Fluctuations in Complex Systems»

In the present talk, we discuss how to perform an analysis of experimental data of complex systems by disentangling the effects of dynamical noise (fluctuations) and deterministic dynamics (trends). We report on results obtained for various complex systems like turbulent fields, the motion of dissipative solitons in nonequilibrium systems, traffic flows, and biological data like human tremor data and brain signals. Special emphasis is put on methods to predict the occurrence of qualitative changes in systems far from equilibrium. [1] R. Friedrich, J. Peinke, M. Reza Rahimi Tabar: Importance of Fluctuations: Complexity in the View of stochastic Processes (in: Springer Encyclopedia on Complexity and System Science, (2009))

  • 17h00-17h45 General Discussion

  • 28 May 2010 Computational models of learning and decision making

  • 9h30-10h00 Andrea Brovelli Institut de Neurosciences Cognitives de la Méditerranée, CNRS and Université de la Méditerranée - Marseille «An introduction to Motor Learning, Decision-Making and Motor Control»

  • 10h00-11h00 Emmanuel Daucé Mouvement & Perception, UMR 6152, Faculté des sciences du sport «Adapting the noise to the problem : a Policy-gradient approach of receptive fields formation»

In machine learning, kernel methods give a consistent framework for applying the perceptron algorithm to non-linear problems. In reinforcement learning, the analog of the perceptron delta-rule is the "policy-gradient" approach proposed by Williams in 1992 in the framework of stochastic neural networks. Despite its generality and straightforward applicability to continuous command problems, rather few developments of the method have been proposed since. Here we present an account of the use of a kernel transformation of the perception space for learning a motor command, in the case of eye orientation and multi-joint arm control. We show that such a transformation allows the system to learn non-linear transformations, like the log-like resolution of a foveated retina, or the transformation from a Cartesian perception space to a log-polar command, by shaping appropriate receptive fields from the perception to the command space. We also present a method for using multivariate correlated noise for learning high-DOF control problems, and propose some interpretations of the putative role of correlated noise for learning in biological systems.

  • 11h00-12h00 Máté Lengyel Computational & Biological Learning Lab, Department of Engineering, University of Cambridge «Why remember? Episodic versus semantic memories for optimal decision making»

Memories are only useful inasmuch as they allow us to act adaptively in the world. Previous studies on the use of memories for decision making have almost exclusively focussed on implicit rather than declarative memories, and even when they did address declarative memories they dealt only with semantic but not episodic memories. In fact, from a purely computational point of view, it seems wasteful to have memories that are episodic in nature: why should it be better to act on the basis of the recollection of single happenings (episodic memory), rather than the seemingly normative use of accumulated statistics from multiple events (semantic memory)? Using the framework of reinforcement learning, and Markov decision processes in particular, we analyze in depth the performance of episodic versus semantic memory-based control in a sequential decision task under risk and uncertainty in a class of simple environments. We show that episodic control should be useful in a range of cases characterized by complexity and inferential noise, and most particularly at the very early stages of learning, long before habitization (the use of implicit memories) has set in. We interpret data on the transfer of control from the hippocampus to the striatum in the light of this hypothesis.

  • 14h00-15h00 Rafal Bogacz Department of Computer Science, University of Bristol «Optimal decision making and reinforcement learning in the cortico-basal-ganglia circuit»

During this talk I will present a computational model describing decision making process in the cortico-basal ganglia circuit. The model assumes that this circuit performs statistically optimal test that maximizes speed of decisions for any required accuracy. In the model, this circuit computes probabilities that considered alternatives are correct, according to Bayes’ theorem. This talk will show that the equation of Bayes’ theorem can be mapped onto the functional anatomy of a circuit involving the cortex, basal ganglia and thalamus. This theory provides many precise and counterintuitive experimental predictions, ranging from neurophysiology to behaviour. Some of these predictions have been already validated in existing data and others are a subject of ongoing experiments. During the talk I will also discuss the relationships between the above model and current theories of reinforcement learning in the cortico-basal-ganglia circuit.

  • 15h30-16h30 Emmanuel Guigon Institut des Systèmes Intelligents et de Robotique, UPMC - CNRS / UMR 7222 «Optimal feedback control as a principle for adaptive control of posture and movement»

  • 16h30-17h15 General Discussion

Haïm Cohen : Tu Ne Laisseras Point Pleurer

http://ecx.images-amazon.com/images/I/31Rx4loOyhL._SL500_AA300_.jpg

  • Publisher's description (source: amazon)

    • Where can we find hope for a more humane world? By understanding the human dimension of our babies' cries and by responding to them, again and again. Building on psychological and neurobiological arguments, Haïm Cohen lays out his utopia, one that could raise the moral conscience of our children and thereby immunize them against extreme violence. As much a manual of humanism as a reflection on our society, this book is addressed to all parents concerned about the healthy psycho-affective development of their child, but also to any reader interested in the progress of neuroscience.

    • About the author: Haïm Cohen is a paediatrician in Paris.

  • the basic utopia: the importance of not letting a baby cry, which would otherwise lead the baby to accept lack and violence between individuals. It can be grounded in our evolution on the scale of a million years, in our ancient status as hunter-gatherers. Crying is universal, a primary, primal "phasic" language

  • a convergence towards "neuroanalysis": psychoanalysis + neuroscience ...

  • towards an emergence of ethics: the individual only aims at personal fulfilment; the perception of altruism and the emergence of ethics arise from the interaction of these individualities.

reStructuredText rst cheatsheet

  • =====================================================
     The reStructuredText_ Cheat Sheet: Syntax Reminders
    =====================================================
    :Info: See <http://docutils.sf.net/rst.html> for introductory docs.
    :Author: David Goodger <goodger@python.org>
    :Date: $Date: 2006-01-23 02:13:55 +0100 (Mon, 23 Jän 2006) $
    :Revision: $Revision: 4321 $
    :Description: This is a "docinfo block", or bibliographic field list
    
    Section Structure
    =================
    Section titles are underlined or overlined & underlined.
    
    Body Elements
    =============
    Grid table:
    
    +--------------------------------+-----------------------------------+
    | Paragraphs are flush-left,     | Literal block, preceded by "::":: |
    | separated by blank lines.      |                                   |
    |                                |     Indented                      |
    |     Block quotes are indented. |                                   |
    +--------------------------------+ or::                              |
    | >>> print 'Doctest block'      |                                   |
    | Doctest block                  | > Quoted                          |
    +--------------------------------+-----------------------------------+
    | | Line blocks preserve line breaks & indents. [new in 0.3.6]       |
    | |     Useful for addresses, verse, and adornment-free lists; long  |
    |       lines can be wrapped with continuation lines.                |
    +--------------------------------------------------------------------+
    
    Simple tables:
    
    ================  ============================================================
    List Type         Examples
    ================  ============================================================
    Bullet list       * items begin with "-", "+", or "*"
    Enumerated list   1. items use any variation of "1.", "A)", and "(i)"
                      #. also auto-enumerated
    Definition list   Term is flush-left : optional classifier
                          Definition is indented, no blank line between
    Field list        :field name: field body
    Option list       -o  at least 2 spaces between option & description
    ================  ============================================================
    
    ================  ============================================================
    Explicit Markup   Examples (visible in the `text source <cheatsheet.txt>`_)
    ================  ============================================================
    Footnote          .. [1] Manually numbered or [#] auto-numbered
                         (even [#labelled]) or [*] auto-symbol
    Citation          .. [CIT2002] A citation.
    Hyperlink Target  .. _reStructuredText: http://docutils.sf.net/rst.html
                      .. _indirect target: reStructuredText_
                      .. _internal target:
    Anonymous Target  __ http://docutils.sf.net/docs/ref/rst/restructuredtext.html
    Directive ("::")  .. image:: images/biohazard.png
    Substitution Def  .. |substitution| replace:: like an inline directive
    Comment           .. is anything else
    Empty Comment     (".." on a line by itself, with blank lines before & after,
                      used to separate indentation contexts)
    ================  ============================================================
    
    Inline Markup
    =============
    *emphasis*; **strong emphasis**; `interpreted text`; `interpreted text
    with role`:emphasis:; ``inline literal text``; standalone hyperlink,
    http://docutils.sourceforge.net; named reference, reStructuredText_;
    `anonymous reference`__; footnote reference, [1]_; citation reference,
    [CIT2002]_; |substitution|; _`inline internal target`.
    
    
    Directive Quick Reference
    =========================
    See <http://docutils.sf.net/docs/ref/rst/directives.html> for full info.
    
    ================  ============================================================
    Directive Name    Description (Docutils version added to, in [brackets])
    ================  ============================================================
    attention         Specific admonition; also "caution", "danger",
                      "error", "hint", "important", "note", "tip", "warning"
    admonition        Generic titled admonition: ``.. admonition:: By The Way``
    image             ``.. image:: picture.png``; many options possible
    figure            Like "image", but with optional caption and legend
    topic             ``.. topic:: Title``; like a mini section
    sidebar           ``.. sidebar:: Title``; like a mini parallel document
    parsed-literal    A literal block with parsed inline markup
    rubric            ``.. rubric:: Informal Heading``
    epigraph          Block quote with class="epigraph"
    highlights        Block quote with class="highlights"
    pull-quote        Block quote with class="pull-quote"
    compound          Compound paragraphs [0.3.6]
    container         Generic block-level container element [0.3.10]
    table             Create a titled table [0.3.1]
    list-table        Create a table from a uniform two-level bullet list [0.3.8]
    csv-table         Create a table from CSV data (requires Python 2.3+) [0.3.4]
    contents          Generate a table of contents
    sectnum           Automatically number sections, subsections, etc.
    header, footer    Create document decorations [0.3.8]
    target-notes      Create an explicit footnote for each external target
    meta              HTML-specific metadata
    include           Read an external reST file as if it were inline
    raw               Non-reST data passed untouched to the Writer
    replace           Replacement text for substitution definitions
    unicode           Unicode character code conversion for substitution defs
    date              Generates today's date; for substitution defs
    class             Set a "class" attribute on the next element
    role              Create a custom interpreted text role [0.3.2]
    default-role      Set the default interpreted text role [0.3.10]
    title             Set the metadata document title [0.3.10]
    ================  ============================================================
    
    Interpreted Text Role Quick Reference
    =====================================
    See <http://docutils.sf.net/docs/ref/rst/roles.html> for full info.
    
    ================  ============================================================
    Role Name         Description
    ================  ============================================================
    emphasis          Equivalent to *emphasis*
    literal           Equivalent to ``literal`` but processes backslash escapes
    PEP               Reference to a numbered Python Enhancement Proposal
    RFC               Reference to a numbered Internet Request For Comments
    raw               For non-reST data; cannot be used directly (see docs) [0.3.6]
    strong            Equivalent to **strong**
    sub               Subscript
    sup               Superscript
    title             Title reference (book, etc.); standard default role
    ================  ============================================================
  • which results in the rendered cheat sheet.


replacing text in files

using sed

  • The UNIX command sed is useful for finding and replacing text in single or multiple files. This page lists some common sed commands that speed up code editing.

  • To replace foo with foo_bar in a single file:

    sed -i 's/foo/foo_bar/g' my_script.py
    • -i = edit the file "in-place": sed will directly modify the file if it finds anything to replace

    • s = substitute the following text

    • foo = the text string to be substituted

    • foo_bar = the replacement string

    • g = global, match all occurrences in the line

  • To replace foo with foo_bar in multiple files:

    sed -i 's/foo/foo_bar/g'  *.py
  • Consult the manual pages of the operating system that you use: man sed

  • in the particular case of changing a scaling parameter in a set of experiment files:

    sed -i 's/size = 6/size = 7/g'  experiment*.py
    sed -i 's/size = 7/size = 6/g'  experiment*.py

using vim

  • on the current buffer, with confirmation

    :%s/old_text/new_text/cg
  • on the current buffer

    :%s/old_text/new_text/g
  • to get help

    :help substitute
  • you can pass the required files to 'args' and apply any command to all of them using 'argdo'. Here we first apply the substitute command 's' and then 'update', which only saves the modified files.

    :args *.py
    :argdo :%s/old_text/new_text/g | update

using python
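
  • for instance, the same kind of substitution as with sed can be sketched with the standard library (the glob pattern and the strings are placeholders):

    import fileinput, glob, sys

    # replace 'foo' with 'foo_bar' in place in every *.py file of the directory
    for line in fileinput.input(glob.glob('*.py'), inplace=True):
        sys.stdout.write(line.replace('foo', 'foo_bar'))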

bibdesk + citeulike

  • as described in http://www.academicproductivity.com/2009/citeulike-bibdesk-sync-your-references-and-live-smarter/

  • "Add external file group"

  • enter the http://www.citeulike.org/bibtex/user/LaurentPerrinet?fieldmap=posted-at:date-added&do_username_prefix=0&key_type=4 URL (change the name accordingly)

  • there is no back-sync from BibDesk to CiteULike, except through a manual export / import workflow

  • to focus on one tag, use something like http://www.citeulike.org/bibtex/user/LaurentPerrinet/tag/motion-energy?fieldmap=posted-at:date-added&do_username_prefix=0&key_type=4

Richard Dawkins on our "queer" universe

http://www.ted.com/talks/richard_dawkins_on_our_queer_universe.html http://dotsub.com/view/2e6446ef-42c7-483a-bbda-df65b1cc4c84/viewTranscript/eng

My title: "Queerer than we can suppose: The strangeness of science." "Queerer than we can suppose" comes from J.B.S. Haldane, the famous biologist, who said, "Now, my own suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose. I suspect that there are more things in heaven and earth than are dreamed of, or can be dreamed of, in any philosophy." Richard Feynman compared the accuracy of quantum theories -- experimental predictions -- to specifying the width of North America to within one hair's breadth of accuracy. This means that quantum theory has got to be in some sense true. Yet the assumptions that quantum theory needs to make in order to deliver those predictions are so mysterious that even Feynman himself was moved to remark, "If you think you understand quantum theory, you don't understand quantum theory."

It's so queer that physicists resort to one or another paradoxical interpretation of it. David Deutsch, who's talking here, in The Fabric of Reality, embraces the "many worlds" interpretation of quantum theory, because the worst that you can say about it is that it's preposterously wasteful. It postulates a vast and rapidly growing number of universes existing in parallel -- mutually undetectable except through the narrow porthole of quantum mechanical experiments. And that's Richard Feynman.

The biologist Lewis Wolpert believes that the queerness of modern physics is just an extreme example. Science, as opposed to technology, does violence to common sense. Every time you drink a glass of water, he points out, the odds are that you will imbibe at least one molecule that passed through the bladder of Oliver Cromwell. (Laughter) It's just elementary probability theory. The number of molecules per glassful is hugely greater than the number of glassfuls, or bladdersful, in the world -- and, of course, there's nothing special about Cromwell or bladders. You have just breathed in a nitrogen atom that passed through the right lung of the third iguanodon to the left of the tall cycad tree.

"Queerer than we can suppose." What is it that makes us capable of supposing anything, and does this tell us anything about what we can suppose? Are there things about the universe that will be forever beyond our grasp, but not beyond the grasp of some superior intelligence? Are there things about the universe that are, in principle, ungraspable by any mind, however superior? The history of science has been one long series of violent brainstorms, as successive generations have come to terms with increasing levels of queerness in the universe. We're now so used to the idea that the Earth spins -- rather than the Sun moves across the sky -- it's hard for us to realize what a shattering mental revolution that must have been. After all, it seems obvious that the Earth is large and motionless, the Sun small and mobile. But it's worth recalling Wittgenstein's remark on the subject. "Tell me," he asked a friend, "why do people always say, it was natural for man to assume that the sun went round the earth rather than that the earth was rotating?" His friend replied, "Well, obviously because it just looks as though the Sun is going round the Earth." Wittgenstein replied, "Well, what would it have looked like if it had looked as though the Earth was rotating?" (Laughter)

Science has taught us, against all intuition, that apparently solid things, like crystals and rocks, are really almost entirely composed of empty space. And the familiar illustration is the nucleus of an atom is a fly in the middle of a sports stadium and the next atom is in the next sports stadium. So it would seem the hardest, solidest, densest rock is really almost entirely empty space, broken only by tiny particles so widely spaced they shouldn't count. Why, then, do rocks look and feel solid and hard and impenetrable? As an evolutionary biologist I'd say this: our brains have evolved to help us survive within the orders of magnitude of size and speed which our bodies operate at. We never evolved to navigate in the world of atoms. If we had, our brains probably would perceive rocks as full of empty space. Rocks feel hard and impenetrable to our hands precisely because objects like rocks and hands cannot penetrate each other. It's therefore useful for our brains to construct notions like "solidity" and "impenetrability," because such notions help us to navigate our bodies through the middle-sized world in which we have to navigate.

Moving to the other end of the scale, our ancestors never had to navigate through the cosmos at speeds close to the speed of light. If they had, our brains would be much better at understanding Einstein. I want to give the name "Middle World" to the medium-scaled environment in which we've evolved the ability to act -- nothing to do with Middle Earth. Middle World. (Laughter) We are evolved denizens of Middle World, and that limits what we are capable of imagining. You find it intuitively easy to grasp ideas like, when a rabbit moves at the -- sort of medium velocity at which rabbits and other Middle World objects move, and hits another Middle World object, like a rock, it knocks itself out.

May I introduce Major General Albert Stubblebine III, commander of military intelligence in 1983. He stared at his wall in Arlington, Virginia, and decided to do it. As frightening as the prospect was, he was going into the next office. He stood up, and moved out from behind his desk. What is the atom mostly made of? he thought. Space. He started walking. What am I mostly made of? Atoms. He quickened his pace, almost to a jog now. What is the wall mostly made of? Atoms. All I have to do is merge the spaces. Then, General Stubblebine banged his nose hard on the wall of his office. Stubblebine, who commanded 16,000 soldiers, was confounded by his continual failure to walk through the wall. He has no doubt that this ability will, one day, be a common tool in the military arsenal. Who would screw around with an army that could do that? That's from an article in Playboy, which I was reading the other day. (Laughter)

I have every reason to think it's true; I was reading Playboy because I, myself, had an article in it. (Laughter) Unaided human intuition schooled in Middle World finds it hard to believe Galileo when he tells us a heavy object and a light object, air friction aside, would hit the ground at the same instant. And that's because in Middle World, air friction is always there. If we'd evolved in a vacuum we would expect them to hit the ground simultaneously. If we were bacteria, constantly buffeted by thermal movements of molecules, it would be different, but we Middle Worlders are too big to notice Brownian motion. In the same way, our lives are dominated by gravity but are almost oblivious to the force of surface tension. A small insect would reverse these priorities.

Steve Grand -- he's the one on the left, Douglas Adams is on the right -- Steve Grand, in his book, Creation: Life and How to Make It, is positively scathing about our preoccupation with matter itself. We have this tendency to think that only solid, material things are really things at all. Waves of electromagnetic fluctuation in a vacuum seem unreal. Victorians thought the waves had to be waves in some material medium -- the ether. But we find real matter comforting only because we've evolved to survive in Middle World, where matter is a useful fiction. A whirlpool, for Steve Grand, is a thing with just as much reality as a rock.

In a desert plain in Tanzania, in the shadow of the volcano Ol Donyo Lengai, there's a dune made of volcanic ash. The beautiful thing is that it moves bodily. It's what's technically known as a barchan, and the entire dune walks across the desert in a westerly direction at a speed of about 17 meters per year. It retains its crescent shape and moves in the direction of the horns. What happens is that the wind blows the sand up the shallow slope on the other side, and then, as each sand grain hits the top of the ridge, it cascades down on the inside of the crescent, and so the whole horn-shaped dune moves. Steve Grand points out that you and I are, ourselves, more like a wave than a permanent thing. He invites us, the reader, to "think of an experience from your childhood -- something you remember clearly, something you can see, feel, maybe even smell, as if you were really there. After all, you really were there at the time, weren't you? How else would you remember it? But here is the bombshell: You weren't there. Not a single atom that is in your body today was there when that event took place. Matter flows from place to place and momentarily comes together to be you. Whatever you are, therefore, you are not the stuff of which you are made. If that doesn't make the hair stand up on the back of your neck, read it again until it does, because it is important."

So "really" isn't a word that we should use with simple confidence. If a neutrino had a brain, which it evolved in neutrino-sized ancestors, it would say that rocks really do consist of empty space. We have brains that evolved in medium-sized ancestors which couldn't walk through rocks. "Really," for an animal, is whatever its brain needs it to be in order to assist its survival, and because different species live in different worlds, there will be a discomforting variety of reallys. What we see of the real world is not the unvarnished world but a model of the world, regulated and adjusted by sense data, but constructed so it's useful for dealing with the real world.

The nature of the model depends on the kind of animal we are. A flying animal needs a different kind of model from a walking, climbing or swimming animal. A monkey's brain must have software capable of simulating a three-dimensional world of branches and trunks. A mole's software for constructing models of its world will be customized for underground use. A water strider's brain doesn't need 3D software at all, since it lives on the surface of the pond in an Edwin Abbott flatland.

I've speculated that bats may see color with their ears. The world model that a bat needs in order to navigate through three dimensions catching insects must be pretty similar to the world model that any flying bird, a day-flying bird like a swallow, needs to perform the same kind of tasks. The fact that the bat uses echoes in pitch darkness to input the current variables to its model, while the swallow uses light, is incidental. Bats, I even suggested, use perceived hues, such as red and blue, as labels, internal labels, for some useful aspect of echoes -- perhaps the acoustic texture of surfaces, furry or smooth and so on, in the same way as swallows or, indeed, we, use those perceived hues -- redness and blueness etcetera -- to label long and short wavelengths of light. There's nothing inherent about red that makes it long wavelength.

And the point is that the nature of the model is governed by how it is to be used, rather than by the sensory modality involved. J. B .S. Haldane himself had something to say about animals whose world is dominated by smell. Dogs can distinguish two very similar fatty acids, extremely diluted: caprylic acid and caproic acid. The only difference, you see, is that one has an extra pair of carbon atoms in the chain. Haldane guesses that a dog would probably be able to place the acids in the order of their molecular weights by their smells, just as a man could place a number of piano wires in the order of their lengths by means of their notes. Now, there's another fatty acid, capric acid, which is just like the other two, except that it has two more carbon atoms. A dog that had never met capric acid would, perhaps, have no more trouble imagining its smell than we would have trouble imagining a trumpet, say, playing one note higher than we've heard a trumpet play before. Perhaps dogs and rhinos and other smell-oriented animals smell in color. And the argument would be exactly the same as for the bats.

Middle World -- the range of sizes and speeds which we have evolved to feel intuitively comfortable with -- is a bit like the narrow range of the electromagnetic spectrum that we see as light of various colors. We're blind to all frequencies outside that, unless we use instruments to help us. Middle World is the narrow range of reality which we judge to be normal, as opposed to the queerness of the very small, the very large and the very fast. We could make a similar scale of improbabilities; nothing is totally impossible. Miracles are just events that are extremely improbable. A marble statue could wave its hand at us; the atoms that make up its crystalline structure are all vibrating back and forth anyway. Because there are so many of them, and because there's no agreement among them in their preferred direction of movement, the marble, as we see it in Middle World, stays rock steady. But the atoms in the hand could all just happen to move the same way at the same time, and again and again. In this case, the hand would move and we'd see it waving at us in Middle World. The odds against it, of course, are so great that if you set out writing zeros at the time of the origin of the universe, you still would not have written enough zeros to this day.

Evolution in Middle World has not equipped us to handle very improbable events; we don't live long enough. In the vastness of astronomical space and geological time, that which seems impossible in Middle World might turn out to be inevitable. One way to think about that is by counting planets. We don't know how many planets there are in the universe, but a good estimate is about ten to the 20, or 100 billion billion. And that gives us a nice way to express our estimate of life's improbability. Could make some sort of landmark points along a spectrum of improbability, which might look like the electromagnetic spectrum we just looked at.

If life has arisen only once on any -- if -- if life could -- I mean, life could originate once per planet, could be extremely common, or it could originate once per star, or once per galaxy or maybe only once in the entire universe, in which case it would have to be here. And somewhere up there would be the chance that a frog would turn into a prince and similar magical things like that. If life has arisen on only one planet in the entire universe, that planet has to be our planet, because here we are talking about it. And that means that if we want to avail ourselves of it, we're allowed to postulate chemical events in the origin of life which have a probability as low as one in 100 billion billion. I don't think we shall have to avail ourselves of that, because I suspect that life is quite common in the universe. And when I say quite common, it could still be so rare that no one island of life ever encounters another, which is a sad thought.

How shall we interpret "queerer than we can suppose?" Queerer than in principle can be supposed, or just queerer than we can suppose, given the limitations of our brain's evolutionary apprenticeship in Middle World? Could we, by training and practice, emancipate ourselves from Middle World and achieve some sort of intuitive, as well as mathematical, understanding of the very small and the very large? I genuinely don't know the answer. I wonder whether we might help ourselves to understand, say, quantum theory, if we brought up children to play computer games, beginning in early childhood, which had a sort of make believe world of balls going through two slits on a screen, a world in which the strange goings on of quantum mechanics were enlarged by the computer's make believe, so that they became familiar on the Middle-World scale of the stream. And, similarly, a relativistic computer game in which objects on the screen manifest the Lorentz contraction, and so on, to try to get ourselves into the way of thinking -- get children into the way of thinking about it.

I want to end by applying the idea of Middle World to our perceptions of each other. Most scientists today subscribe to a mechanistic view of the mind: we're the way we are because our brains are wired up as they are; our hormones are the way they are. We'd be different, our characters would be different, if our neuro-anatomy and our physiological chemistry were different. But we scientists are inconsistent. If we were consistent, our response to a misbehaving person, like a child murderer, should be something like, this unit has a faulty component; it needs repairing. That's not what we say. What we say -- and I include the most austerely mechanistic among us, which is probably me -- what we say is, "Vile monster, prison is too good for you." Or worse, we seek revenge, in all probability thereby triggering the next phase in an escalating cycle of counter-revenge, which we see, of course, all over the world today. In short, when we're thinking like academics, we regard people as elaborate and complicated machines, like computers or cars, but when we revert to being human we behave more like Basil Fawlty, who, we remember, thrashed his car to teach it a lesson when it wouldn't start on gourmet night. (Laughter)

The reason we personify things like cars and computers is that just as monkeys live in an arboreal world and moles live in an underground world and water striders live in a surface tension-dominated flatland, we live in a social world. We swim through a sea of people -- a social version of Middle World. We are evolved to second-guess the behavior of others by becoming brilliant, intuitive psychologists. Treating people as machines may be scientifically and philosophically accurate, but it's a cumbersome waste of time if you want to guess what this person is going to do next. The economically useful way to model a person is to treat him as a purposeful, goal-seeking agent with pleasures and pains, desires and intentions, guilt, blame-worthiness. Personification and the imputing of intentional purpose is such a brilliantly successful way to model humans, it's hardly surprising the same modeling software often seizes control when we're trying to think about entities for which it's not appropriate, like Basil Fawlty with his car or like millions of deluded people with the universe as a whole. (Laughter)

If the universe is queerer than we can suppose, is it just because we've been naturally selected to suppose only what we needed to suppose in order to survive in the Pleistocene of Africa? Or are our brains so versatile and expandable that we can train ourselves to break out of the box of our evolution? Or, finally, are there some things in the universe so queer that no philosophy of beings, however godlike, could dream them? Thank you very much.

Creating and manipulating scientific data: an introduction to Numpy

The array: the basic tool of scientific computing

We frequently manipulate discrete ordered sets:

  • the discretized time of an experiment/simulation

  • the signal recorded by a measurement device

  • the pixels of an image, ...

The Numpy module makes it possible to

  • create these data sets in one go

  • perform "batch" operations on data arrays (no loop over the elements).

Data array := numpy.ndarray

Creating Numpy data arrays

A simple example to start with:

>>> import numpy as np
>>> a = np.array([0, 1, 2])
>>> a
array([0, 1, 2])
>>> print a
[0 1 2]
>>> b = np.array([[0., 1.], [2., 3.]])
>>> b
array([[ 0.,  1.],
       [ 2.,  3.]])

In practice, we rarely enter the elements one by one...

  • Evenly spaced values:

    >>> import numpy as np
    >>> a = np.arange(10) # from 0 to n-1
    >>> a
    array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    >>> b = np.arange(1., 9., 2) # syntax: start, stop, step
    >>> b
    array([ 1.,  3.,  5.,  7.])

    or, specifying the number of points instead:

    >>> c = np.linspace(0, 1, 6)
    >>> c
    array([ 0. ,  0.2,  0.4,  0.6,  0.8,  1. ])
    >>> d = np.linspace(0, 1, 5, endpoint=False)
    >>> d
    array([ 0. ,  0.2,  0.4,  0.6,  0.8])
  • Constructors for common arrays:

    >>> a = np.ones((3,3))
    >>> a
    array([[ 1.,  1.,  1.],
           [ 1.,  1.,  1.],
           [ 1.,  1.,  1.]])
    >>> a.dtype
    dtype('float64')
    >>> b = np.ones(5, dtype=np.int)
    >>> b
    array([1, 1, 1, 1, 1])
    >>> c = np.zeros((2,2))
    >>> c
    array([[ 0.,  0.],
           [ 0.,  0.]])
    >>> d = np.eye(3)
    >>> d
    array([[ 1.,  0.,  0.],
           [ 0.,  1.,  0.],
           [ 0.,  0.,  1.]])

Graphical representation of the data: matplotlib and mayavi

Now that we have our first data arrays, we are going to visualize them. Matplotlib is a 2-D plotting package; we import its functions as follows

>>> import pylab
>>> # or
>>> from pylab import * # to import everything into the namespace

If you launched Ipython with python(x,y), or with the option ipython -pylab (under Linux), all the pylab functions/objects have already been imported, as if from pylab import * had been done. In what follows, we assume that from pylab import * has been done, or that ipython -pylab has been launched: we will therefore not write pylab.function() but simply function.

Plotting 1-D curves

In [6]: a = np.arange(20)
In [7]: plot(a, a**2) # line plot
Out[7]: [<matplotlib.lines.Line2D object at 0x95abd0c>]
In [8]: plot(a, a**2, 'o') # round markers
Out[8]: [<matplotlib.lines.Line2D object at 0x95b1c8c>]
In [9]: clf() # clear figure
In [10]: loglog(a, a**2)
Out[10]: [<matplotlib.lines.Line2D object at 0x95abf6c>]
In [11]: xlabel('x') # a bit small
Out[11]: <matplotlib.text.Text object at 0x98923ec>
In [12]: xlabel('x', fontsize=26) # bigger
Out[12]: <matplotlib.text.Text object at 0x98923ec>
In [13]: ylabel('y')
Out[13]: <matplotlib.text.Text object at 0x9892b8c>
In [14]: grid()
In [15]: axvline(2)
Out[15]: <matplotlib.lines.Line2D object at 0x9b633cc>

2-D arrays (such as images)

In [48]: # 30x30 array of random numbers between 0 and 1
In [49]: image = np.random.rand(30,30)
In [50]: imshow(image)
Out[50]: <matplotlib.image.AxesImage object at 0x9e954ac>
In [51]: gray()
In [52]: hot()
In [53]: imshow(image, cmap=cm.gray)
Out[53]: <matplotlib.image.AxesImage object at 0xa23972c>
In [54]: axis('off') # remove the ticks and the labels

Matplotlib has many other features: choosing colors or marker sizes, LaTeX fonts, insets within a figure, histograms, etc.

To go further:

3-D representation

For 3-D visualization, we use another package: Mayavi. A quick example: start by relaunching ipython with the options ipython -pylab -wthread

In [59]: from enthought.mayavi import mlab
In [60]: mlab.figure()
get fences failed: -1
param: 6, val: 0
Out[60]: <enthought.mayavi.core.scene.Scene object at 0xcb2677c>
In [61]: mlab.surf(image)
Out[61]: <enthought.mayavi.modules.surface.Surface object at 0xd0862fc>
In [62]: mlab.axes()
Out[62]: <enthought.mayavi.modules.axes.Axes object at 0xd07892c>

The mayavi/mlab window that opens is interactive: by clicking with the left mouse button you can rotate the image, you can zoom with the mouse wheel, etc.

For more information on Mayavi: http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/index.html

Indexing

The elements of Numpy arrays can be accessed (indexed) in the same way as other Python sequences (list, tuple)

>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> a[0], a[2], a[-1]
(0, 2, 9)

Careful! Indexing starts at 0, as for other Python sequences (and as in C/C++). In Fortran or Matlab, indexing starts at 1.

For multidimensional arrays, the index of an element is given by a tuple of integers

>>> a = np.diag(np.arange(5))
>>> a
array([[0, 0, 0, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 0, 2, 0, 0],
       [0, 0, 0, 3, 0],
       [0, 0, 0, 0, 4]])
>>> a[1,1]
1
>>> a[2,1] = 10 # third row, second column
>>> a
array([[ 0,  0,  0,  0,  0],
       [ 0,  1,  0,  0,  0],
       [ 0, 10,  2,  0,  0],
       [ 0,  0,  0,  3,  0],
       [ 0,  0,  0,  0,  4]])
>>> a[1]
array([0, 1, 0, 0, 0])

To remember:

  • In 2-D, the first dimension corresponds to the rows, the second to the columns.

  • For an array a with more than one dimension, a[0] is interpreted by taking all the elements in the unspecified dimensions.

Slicing (regularly spaced access to the elements)

As for indexing, this is similar to the slicing of other Python sequences:

>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> a[2:9:3] # [start:stop:step]
array([2, 5, 8])

Careful: the last index is not included

>>> a[:4]
array([0, 1, 2, 3])

start:stop:step is a slice object, which represents the set of indices range(start, stop, step). A slice can also be created explicitly

>>> sl = slice(1, 9, 2)
>>> a = np.arange(10)
>>> b = 2*a + 1
>>> a, b
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([ 1,  3,  5,  7,  9, 11, 13, 15, 17, 19]))
>>> a[sl], b[sl]
(array([1, 3, 5, 7]), array([ 3,  7, 11, 15]))

It is not necessary to specify all of the start (index 0 by default), the stop (last index by default) and the step (1 by default):

>>> a[1:3]
array([1, 2])
>>> a[::2]
array([0, 2, 4, 6, 8])
>>> a[3:]
array([3, 4, 5, 6, 7, 8, 9])

And of course, it works for multidimensional arrays:

>>> a = np.eye(5)
>>> a
array([[ 1.,  0.,  0.,  0.,  0.],
       [ 0.,  1.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  1.,  0.],
       [ 0.,  0.,  0.,  0.,  1.]])
>>> a[2:4,:3] # 3rd and 4th rows, first three columns
array([[ 0.,  0.,  1.],
       [ 0.,  0.,  0.]])

The value of all the elements indexed by a slice can be changed very simply

>>> a[:3,:3] = 4
>>> a
array([[ 4.,  4.,  4.,  0.,  0.],
       [ 4.,  4.,  4.,  0.,  0.],
       [ 4.,  4.,  4.,  0.,  0.],
       [ 0.,  0.,  0.,  1.,  0.],
       [ 0.,  0.,  0.,  0.,  1.]])

A slicing operation creates a view of the original array, that is, a particular way of reading its memory. The original array is therefore not copied. When the view is modified, the original array is modified as well:

>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> b = a[::2]; b
array([0, 2, 4, 6, 8])
>>> b[0] = 12
>>> b
array([12,  2,  4,  6,  8])
>>> a # a has been modified too!
array([12,  1,  2,  3,  4,  5,  6,  7,  8,  9])

This behavior can be surprising at first... but it is very convenient for managing memory economically.

If we want a separate copy of the original array

>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> b = np.copy(a[::2]); b
array([0, 2, 4, 6, 8])
>>> b[0] = 12
>>> b
array([12,  2,  4,  6,  8])
>>> a # a has not been modified
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

Manipulating the shape of arrays

The shape of an array is obtained with the ndarray.shape attribute, which returns a tuple of the array's dimensions

>>> a = np.arange(10)
>>> a.shape
(10,)
>>> b = np.ones((3,4))
>>> b.shape
(3, 4)
>>> b.shape[0] # the elements of the tuple b.shape can be accessed
3
>>> # and one can also do
>>> np.shape(b)
(3, 4)

The length of the first dimension can also be obtained with np.alen (by analogy with len for a list), and the total number of elements with ndarray.size:

>>> np.alen(b)
3
>>> b.size
12

Several Numpy functions allow us to create an array with a different shape from a starting array:

>>> a = np.arange(36)
>>> b = a.reshape((6, 6))
>>> b
array([[ 0,  1,  2,  3,  4,  5],
       [ 6,  7,  8,  9, 10, 11],
       [12, 13, 14, 15, 16, 17],
       [18, 19, 20, 21, 22, 23],
       [24, 25, 26, 27, 28, 29],
       [30, 31, 32, 33, 34, 35]])

ndarray.reshape returns a view, not a copy (at least whenever the memory layout allows it, as here)

>>> b[0,0] = 10
>>> a
array([10,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
       17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
       34, 35])

An array with a different total number of elements can also be created with ndarray.resize:

>>> a = np.arange(36)
>>> a.resize((4,2))
>>> a
array([[0, 1],
       [2, 3],
       [4, 5],
       [6, 7]])
>>> b = np.arange(4)
>>> b.resize(3, 2)
>>> b
array([[0, 1],
       [2, 3],
       [0, 0]])

Or a larger array can be tiled from a smaller one

>>> a = np.arange(4).reshape((2,2))
>>> a
array([[0, 1],
       [2, 3]])
>>> np.tile(a, (2,3))
array([[0, 1, 0, 1, 0, 1],
       [2, 3, 2, 3, 2, 3],
       [0, 1, 0, 1, 0, 1],
       [2, 3, 2, 3, 2, 3]])

Exercises: warming up with numpy arrays

Thanks to the various constructors, to indexing and slicing, and to simple operations on arrays (+/-/x/:), large arrays corresponding to a great variety of patterns can easily be created.

Example: how to create the following array:

[[ 0  1  2  3  4]
 [ 5  6  7  8  9]
 [10 11 12 13  0]
 [15 16 17 18 19]
 [20 21 22 23 24]]

Answer

>>> a = np.arange(25).reshape((5,5))
>>> a[2, 4] = 0

Exercises: create the following arrays in the simplest possible way (not element by element); a possible solution is sketched after the two arrays.

[[ 1.  1.  1.  1.]
 [ 1.  1.  1.  1.]
 [ 1.  1.  1.  2.]
 [ 1.  6.  1.  1.]]

[[0 0 0 0 0]
 [2 0 0 0 0]
 [0 3 0 0 0]
 [0 0 4 0 0]
 [0 0 0 5 0]
 [0 0 0 0 6]]

De "vraies données" : lire et écrire des tableaux dans des fichiers

Bien souvent, nos expériences ou nos simulations écrivent leurs résultats dans des fichiers. Il faut ensuite les charger dans Python sous la forme de tableaux Numpy pour les manipuler. De même, on peut vouloir sauver les tableaux qu'on a obtenus dans des fichiers.

Going to the right directory

To move around a file tree:

  • use the Ipython facilities: cd, pwd, tab-completion.

  • the os (system routines) and os.path (path handling) modules

    >>> import os, os.path
    >>> current_dir = os.getcwd()
    >>> current_dir
    '/home/gouillar/sandbox'
    >>> data_dir = os.path.join(current_dir, 'data')
    >>> data_dir
    '/home/gouillar/sandbox/data'
    >>> if not(os.path.exists(data_dir)):
    ...     os.mkdir('data')
    ...     print "creating the 'data' directory"
    ...
    >>> os.chdir(data_dir) # or in Ipython: cd data

Ipython can in fact be used like a genuine shell, thanks to its built-in facilities and to the os module.

Writing a data array to a file

>>> a = np.arange(100)
>>> a = a.reshape((10, 10))
  • Writing to a text file (ascii)

    >>> np.savetxt('data_a.txt', a)
  • Writing to a binary file (.npy extension)

    >>> np.save('data_a.npy', a)

Loading a data array from a file

  • Reading from a text file

    >>> b = np.loadtxt('data_a.txt')
  • Reading from a binary file

    >>> c = np.load('data_a.npy')

Reading matlab data files

scipy.io.loadmat: the matlab structure of a .mat file is stored in a dictionary.
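
As a minimal sketch (the file name data.mat and the variable name spikes below are purely hypothetical):

>>> from scipy import io
>>> data = io.loadmat('data.mat') # hypothetical .mat file
>>> data.keys() # the matlab variables appear as keys of the dictionary
>>> spikes = data['spikes'] # 'spikes' is a hypothetical variable stored in the file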

Selecting a file from a list

Let us save each row of a in a separate file

>>> for i, l in enumerate(a):
...     print i, l
...     np.savetxt('ligne_'+str(i), l)
...
0 [0 1 2 3 4 5 6 7 8 9]
1 [10 11 12 13 14 15 16 17 18 19]
2 [20 21 22 23 24 25 26 27 28 29]
3 [30 31 32 33 34 35 36 37 38 39]
4 [40 41 42 43 44 45 46 47 48 49]
5 [50 51 52 53 54 55 56 57 58 59]
6 [60 61 62 63 64 65 66 67 68 69]
7 [70 71 72 73 74 75 76 77 78 79]
8 [80 81 82 83 84 85 86 87 88 89]
9 [90 91 92 93 94 95 96 97 98 99]

To get the list of all the files whose names begin with ligne, we call on the glob module, which "globs" all the paths matching a pattern. Example

>>> import glob
>>> filelist = glob.glob('ligne*')
>>> filelist
['ligne_0', 'ligne_1', 'ligne_2', 'ligne_3', 'ligne_4', 'ligne_5', 'ligne_6', 'ligne_7', 'ligne_8', 'ligne_9']
>>> # beware, the list is not always sorted
>>> filelist.sort()
>>> l2 = np.loadtxt(filelist[2])

Note: it is also possible to create arrays from Excel/Calc files, hdf5 files, etc. (but with the help of additional modules not described here: xlrd, pytables, etc.).

Simple mathematical and statistical operations on arrays

A number of operations on arrays are coded directly in numpy (and are therefore generally very efficient):

>>> a = np.arange(10)
>>> a.min() # or np.min(a)
0
>>> a.max() # or np.max(a)
9
>>> a.sum() # or np.sum(a)
45

The operation can also be performed along a single axis, rather than over all the elements

>>> a = np.array([[1, 3], [9, 6]])
>>> a
array([[1, 3],
       [9, 6]])
>>> a.mean(axis=0) # array containing the mean of each column
array([ 5. ,  4.5])
>>> a.mean(axis=1) # array containing the mean of each row
array([ 2. ,  7.5])

Many other operations are available: we will discover a few of them over the course of this tutorial.

Note

Arithmetic operations on arrays are element-wise operations. In particular, the product is not a matrix product (contrary to Matlab)! The matrix product is provided by np.dot:

>>> a = np.ones((2,2))
>>> a*a
array([[ 1.,  1.],
       [ 1.,  1.]])
>>> np.dot(a,a)
array([[ 2.,  2.],
       [ 2.,  2.]])

Example: simulating diffusion with a random walker

What is the typical distance from the origin of a random walker after t jumps to the right or to the left?

>>> nreal = 1000 # number of realizations of the walk
>>> tmax = 200 # time over which we follow the walker
>>> # We draw at random all the steps 1 or -1 of the walk
>>> walk = 2 * ( np.random.random_integers(0, 1, (nreal,tmax)) - 0.5 )
>>> np.unique(walk) # Check: all the steps are indeed 1 or -1
array([-1.,  1.])
>>> # We build the walks by summing these steps over time
>>> cumwalk = np.cumsum(walk, axis=1) # axis = 1: the time dimension
>>> sq_distance = cumwalk**2
>>> # We average over the realizations
>>> mean_sq_distance = np.mean(sq_distance, axis=0)
In [39]: figure()
In [40]: plot(mean_sq_distance)
In [41]: figure()
In [42]: plot(np.sqrt(mean_sq_distance))

We indeed recover that the distance grows like the square root of time!

Exercise: statistics on women in research (INSEE data)

  1. Get the files organismes.txt and pourcentage_femmes.txt (from the course USB key or from http://www.dakarlug.org/pat/scientifique/data/).

  2. Create a data array by opening the file pourcentage_femmes.txt with np.loadtxt. What is the size of this array?

  3. The columns correspond to the years 2006 down to 2001. Create an annees array (no accent!) containing the integers corresponding to these years.

  4. The rows correspond to different research organizations, whose names are stored in the file organismes.txt. Create an organisms array by opening this file. Beware: since np.loadtxt creates float arrays by default, you must tell it that you want an array of strings: organisms = np.loadtxt('organismes.txt', dtype=str)

  5. Check that the number of rows of data equals the number of organizations.

  6. What is the maximal percentage of women over all organizations, all years included?

  7. Create an array containing the temporal mean of the percentage of women for each organization (i.e., average data along axis 1).

  8. Which organization had the highest percentage of women in 2004? (Hint: np.argmax.)

  9. Plot a histogram of the percentage of women in the different organizations in 2006 (hint: np.histogram, then matplotlib's bar or plot for the visualization). A possible solution to the whole exercise is sketched below.
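
A possible solution, as a sketch under the assumptions stated above (the actual file contents are not reproduced here, so outputs are omitted):

>>> data = np.loadtxt('pourcentage_femmes.txt')
>>> organisms = np.loadtxt('organismes.txt', dtype=str)
>>> data.shape # size of the array
>>> annees = np.arange(2006, 2000, -1) # the columns go from 2006 down to 2001
>>> data.shape[0] == organisms.shape[0] # one row per organization
>>> data.max() # maximal percentage, all years included
>>> mean_per_organism = data.mean(axis=1) # temporal mean for each organization
>>> organisms[np.argmax(data[:, annees == 2004])] # highest percentage in 2004
>>> counts, bins = np.histogram(data[:, annees == 2006], bins=10)
>>> bar(bins[:-1], counts, width=bins[1] - bins[0]) # histogram of the 2006 percentages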

Advanced indexing (fancy indexing)

Numpy arrays can be indexed with slices, but also with boolean arrays (masks) or with integer arrays: these more advanced operations are called fancy indexing.

Masks

>>> np.random.seed(3)
>>> a = np.random.random_integers(0, 20, 15)
>>> a
array([10,  3,  8,  0, 19, 10, 11,  9, 10,  6,  0, 20, 12,  7, 14])
>>> (a%3 == 0)
array([False,  True, False,  True, False, False, False,  True, False,
        True,  True, False,  True, False, False], dtype=bool)
>>> mask = (a%3 == 0)
>>> extract_from_a = a[mask] # one could write directly a[a%3==0]
>>> extract_from_a # we extract a sub-array thanks to the mask
array([ 3,  0,  9,  6,  0, 12])

Extracting a sub-array with a mask produces a copy of this sub-array, not a view

>>> extract_from_a[:] = -1 # modifying the copy leaves a untouched
>>> a
array([10,  3,  8,  0, 19, 10, 11,  9, 10,  6,  0, 20, 12,  7, 14])

Indexing with masks can be very useful for assigning a new value to a sub-array

>>> a[mask] = 0
>>> a
array([10,  0,  8,  0, 19, 10, 11,  0, 10,  0,  0, 20,  0,  7, 14])

Indexing with an array of integers

>>> a = np.arange(10)
>>> a[::2] += 3 # so that we do not always get the same np.arange(10)...
>>> a
array([ 3,  1,  5,  3,  7,  5,  9,  7, 11,  9])
>>> a[[2, 5, 1, 8]] # or a[np.array([2, 5, 1, 8])]
array([ 5,  5,  1, 11])

We can index with integer arrays in which the same index is repeated several times

>>> a[[2, 3, 2, 4, 2]]
array([5, 3, 5, 7, 5])

New values can be assigned with this kind of indexing

>>> a[[9, 7]] = -10
>>> a
array([  3,   1,   5,   3,   7,   5,   9, -10,  11, -10])
>>> a[[2, 3, 2, 4, 2]] +=1
>>> a
array([  3,   1,   6,   4,   8,   5,   9, -10,  11, -10])

When a new array is created by indexing with an array of integers, the new array has the same shape as the array of integers

>>> a = np.arange(10)
>>> idx = np.array([[3, 4], [9, 7]])
>>> a[idx]
array([[3, 4],
       [9, 7]])
>>> b = np.arange(10)

>>> a = np.arange(12).reshape(3,4)
>>> a
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
>>> i = np.array( [ [0,1],
...              [1,2] ] )
>>> j = np.array( [ [2,1],
...              [3,3] ] )
>>> a[i,j]
array([[ 2,  5],
       [ 7, 11]])

Exercise

Let us go back to our statistics on the percentage of women in research (the data and organisms arrays)

  1. Create a sup30 array, of the same size as data, whose value is 1 where the value of data is greater than 30%, and 0 otherwise.

  2. Create an array containing the organization with the highest percentage of women for each year (a possible solution is sketched below).
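
A possible solution, as a sketch under the same assumptions as above (data has one organization per row and one year per column):

>>> sup30 = (data > 30).astype(int) # 1 where data > 30%, 0 otherwise
>>> best = organisms[np.argmax(data, axis=0)] # best organization for each year (one entry per column)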

Broadcasting

Elementary operations on numpy arrays (addition, etc.) are performed element by element, and therefore operate on arrays of the same size. It is nevertheless possible to perform operations on arrays of different sizes if numpy can transform these arrays so that they all have the same size: this transformation is called broadcasting.

[Figure omitted: a schematic example of broadcasting, in which the smaller array is stretched to match the larger one]

which gives, in Ipython:

>>> a = np.arange(0, 40, 10)
>>> b = np.arange(0, 3)
>>> a = a.reshape((4,1)) # a must be transformed into a "vertical" array
>>> a + b
array([[ 0,  1,  2],
       [10, 11, 12],
       [20, 21, 22],
       [30, 31, 32]])

We have already used broadcasting without knowing it

>>> a = np.arange(20).reshape((4,5))
>>> a
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19]])
>>> a[0] = 1 # we assign a 0-d array (a scalar) to a 1-d slice
>>> a[:3] = np.arange(1,6)
>>> a
array([[ 1,  2,  3,  4,  5],
       [ 1,  2,  3,  4,  5],
       [ 1,  2,  3,  4,  5],
       [15, 16, 17, 18, 19]])

Fancy indexing and broadcasting can even be used at the same time: let us revisit an example already used above

>>> a = np.arange(12).reshape(3,4)
>>> a
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
>>> i = np.array( [ [0,1],
...              [1,2] ] )
>>> a[i, 2] # same as a[i, 2*np.ones((2,2), dtype=int)]
array([[ 2,  6],
       [ 6, 10]])

Broadcasting may seem a bit magical, but it is actually quite natural to use it as soon as we want to solve a problem whose output is an array with more dimensions than the input data.

Example: let us build an array of distances (in miles) between the cities along Route 66: Chicago, Springfield, Saint Louis, Tulsa, Oklahoma City, Amarillo, Santa Fe, Albuquerque, Flagstaff and Los Angeles.

>>> mileposts = np.array([0, 198, 303, 736, 871, 1175, 1475, 1544,
...                         1913, 2448])
>>> tableau_de_distances = np.abs(mileposts - mileposts[:,np.newaxis])
>>> tableau_de_distances
array([[   0,  198,  303,  736,  871, 1175, 1475, 1544, 1913, 2448],
       [ 198,    0,  105,  538,  673,  977, 1277, 1346, 1715, 2250],
       [ 303,  105,    0,  433,  568,  872, 1172, 1241, 1610, 2145],
       [ 736,  538,  433,    0,  135,  439,  739,  808, 1177, 1712],
       [ 871,  673,  568,  135,    0,  304,  604,  673, 1042, 1577],
       [1175,  977,  872,  439,  304,    0,  300,  369,  738, 1273],
       [1475, 1277, 1172,  739,  604,  300,    0,   69,  438,  973],
       [1544, 1346, 1241,  808,  673,  369,   69,    0,  369,  904],
       [1913, 1715, 1610, 1177, 1042,  738,  438,  369,    0,  535],
       [2448, 2250, 2145, 1712, 1577, 1273,  973,  904,  535,    0]])

Warning

Good practices

In the previous example, we can note some good (and bad) practices:

  • Give explicit variable names (no need for a comment to explain what the variable contains).

  • Put spaces after commas, around =, etc. A number of rules for writing "beautiful" code (and, above all, for using the same conventions as everyone else!) are given in the Style Guide for Python Code and the Docstring Conventions page (for organizing help messages).

  • Except in special cases (e.g. a course for French speakers?), give variable names in English and write the comments in English (imagine inheriting code commented in Russian...).

Many problems on grids or lattices can also use broadcasting. For example, if we want to compute the distance to the origin of the points of a 5x5 grid, we can do

>>> x, y = np.arange(5), np.arange(5)
>>> distance = np.sqrt(x**2 + y[:, np.newaxis]**2)
>>> distance
array([[ 0.        ,  1.        ,  2.        ,  3.        ,  4.        ],
       [ 1.        ,  1.41421356,  2.23606798,  3.16227766,  4.12310563],
       [ 2.        ,  2.23606798,  2.82842712,  3.60555128,  4.47213595],
       [ 3.        ,  3.16227766,  3.60555128,  4.24264069,  5.        ],
       [ 4.        ,  4.12310563,  4.47213595,  5.        ,  5.65685425]])

The values of the distance array can be represented as a color map thanks to the pylab.imshow function (syntax: pylab.imshow(distance); see the help for more options).
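
For instance (a short sketch; cm.gray is just one possible colormap, and colorbar() adds a color scale next to the image):

>>> imshow(distance, cmap=cm.gray)
>>> colorbar()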

Note: the numpy.ogrid function allows us to directly create the x and y vectors of the previous example, with two different "significant dimensions"

>>> x, y = np.ogrid[0:5, 0:5]
>>> x, y
(array([[0],
       [1],
       [2],
       [3],
       [4]]), array([[0, 1, 2, 3, 4]]))
>>> x.shape, y.shape
((5, 1), (1, 5))
>>> distance = np.sqrt(x**2 + y**2)

np.ogrid is therefore very useful as soon as we have computations to perform on a grid. np.mgrid, on the other hand, directly provides matrices full of indices, for the cases where we cannot (or do not want to) take advantage of broadcasting

>>> x, y = np.mgrid[0:4, 0:4]
>>> x
array([[0, 0, 0, 0],
       [1, 1, 1, 1],
       [2, 2, 2, 2],
       [3, 3, 3, 3]])
>>> y
array([[0, 1, 2, 3],
       [0, 1, 2, 3],
       [0, 1, 2, 3],
       [0, 1, 2, 3]])

Synthesis exercise: a medallion for Lena

We are going to perform a few manipulations of numpy arrays, starting from the famous Lena image (http://www.cs.cmu.edu/~chuck/lennapg/). scipy provides a 2D array of the Lena image with the scipy.lena function

>>> import scipy
>>> lena = scipy.lena()

Our manipulations will produce several images (not reproduced here): using different colormaps, cropping the image, modifying some parts of the image.

  • Let us use pylab's imshow function to display the Lena image.

In [3]: import pylab
In [4]: lena = scipy.lena()
In [5]: pylab.imshow(lena)
  • Lena is then displayed in false colors; a colormap must be specified for it to be displayed in gray levels.

In [6]: pylab.imshow(lena, pylab.cm.gray)
In [7]: # or
In [8]: gray()
  • Create an array with a tighter framing of Lena: for example, remove 30 pixels from all sides of the image. Display this new array with imshow to check.

In [9]: crop_lena = lena[30:-30,30:-30]
  • We now want to frame Lena's face inside a black medallion. For this, we need to

    • create a mask corresponding to the pixels we want to turn black. The mask is defined by the condition (y-256)**2 + (x-256)**2 > 230**2

In [15]: y, x = np.ogrid[0:512,0:512] # the x and y indices of the pixels
In [16]: y.shape, x.shape
Out[16]: ((512, 1), (1, 512))
In [17]: centerx, centery = (256, 256) # center of the image
In [18]: mask = ((y - centery)**2 + (x - centerx)**2)> 230**2

then

  • assign the value 0 to the pixels of the image selected by the mask. The syntax for this is extremely simple and intuitive:

In [19]: lena[mask]=0
In [20]: imshow(lena)
Out[20]: <matplotlib.image.AxesImage object at 0xa36534c>
  • Additional question: copy all the instructions of this exercise into a script medaillon_lena.py, then execute this script in Ipython with %run medaillon_lena.py. A possible version of the script is sketched below.
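
A minimal sketch of what medaillon_lena.py could contain (it simply gathers the instructions above):

# medaillon_lena.py -- draw a black medallion around Lena's face
import numpy as np
import pylab
import scipy

lena = scipy.lena()                  # the 512x512 test image
crop_lena = lena[30:-30, 30:-30]     # tighter framing (not displayed here)

y, x = np.ogrid[0:512, 0:512]        # x and y indices of the pixels
centerx, centery = (256, 256)        # center of the image
mask = ((y - centery)**2 + (x - centerx)**2) > 230**2
lena[mask] = 0                       # black out everything outside the disk

pylab.imshow(lena, cmap=pylab.cm.gray)
pylab.show()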

Conclusion: what do you need to know about numpy arrays to get started?

  • Know how to create arrays: array, arange, ones, zeros.

  • Know the shape of the array with array.shape, then use slicing to obtain different views of the array: array[::2], etc. Change the shape of the array with reshape.

  • Obtain a subset of the elements of an array and/or modify their values with masks

    >>> a[a<0] = 0
  • Know how to perform a few operations on arrays, such as finding the max or the mean (array.max(), array.mean()). There is no need to remember everything, but have the reflex of searching the documentation.

  • For more advanced use: master indexing with arrays of integer indices, and broadcasting. Know more numpy functions that perform operations on arrays.

using grin

  • I just discovered grin, "grep my way"

Warning

This post is certainly obsolete...

  • install:

    sudo easy_install grin
  • search recursively

    grin  expression
  • search recursively in a specific directory

    grin  expression /this/directory
  • search recursively in python files

    grin -I "*.py"  expression

securing the server

  • SSH (Secure Shell) is installed on most systems (here GnuLinuxUbuntu and MacOsX), so there is no need to worry about compiling it (try Putty on Windows). Try a simple ssh -V to check the version, or which ssh to locate the binary.

  • Thanks to ssh, you can transport all your data (accessing files, merging repositories, launching remote X programs) transparently over a secure connection. Thanks to tunneling, this is also simpler and thus more secure for your computer and your provider. Having all the security located in one interface is a big advantage: once your SSH communication channel is set up, you can focus on what you actually wish to do (SVN, etc...).

  • Most of the documentation may be found in man ssh and man ssh-keygen (remember that, thanks to the underlying pager, you can search for a keyword, for instance hello, by typing /hello[ENTER]). Many other sources of help exist, such as this FAQ

Setting up SSH: spreading the good keys

  1. There are many ways to authenticate your session, mainly passwords or keys. Keys are to be preferred, to avoid typing your password ten times a day. They are also more secure (you type your key's passphrase locally, not remotely).

  2. Generate a private/public key pair. Simple command to do this:

    ssh-keygen -t rsa
  3. Copy the public key to the remote host:

    ssh-copy-id -i ~/.ssh/id_rsa.pub username@host

    This can also be done using

    scp ~/.ssh/id_rsa.pub username@host:~/mykey.pub
    ssh username@host
    cat mykey.pub >> .ssh/authorized_keys
  4. Now try logging into the remote machine again from the local one

    ssh REMOTE_USERNAME@remote_host
  5. Check that your public key is in the list of authorized keys: .ssh/authorized_keys.

  6. Change the passphrase of your key regularly:

    ssh-keygen -p

    It is not advisable to use an empty passphrase; rather, use a key agent (see below).

Aliasing

  • it is possible to create aliases of the ssh binary for given hostnames... but more simply, you may put

    alias myserver='ssh -Y -p2221 myuser@myserver.domain.com'

    where 2221 is the port used by the SSH server on myserver.domain.com

  • more cleanly, you may edit your .ssh/config file with:

    Host myserver.domain.com
            User myuser
            Port 2221

    Make sure the permissions are correct: chmod 600 ~/.ssh/config

key agent

  • An agent loads your keys on the local machine:

    • it is more secure, since all passphrases are typed locally and only encrypted authentication data is sent,

    • it is more practical, since you type your passphrase only once per session

  • http://www.sshkeychain.org/mirrors/SSH-with-Keys-HOWTO/

  • GUI interface on MacOsX : http://www.sshkeychain.org/

    • install with macports using sudo port install SSHKeychain, you'll find it in /Applications/MacPorts

tunnels

securing the server

  • Robots usually try common name / password combinations on your SSH server. If admin_name is the only user of your server, you may use the option AllowUsers admin_name in the SSH server configuration file (usually /etc/ssh/sshd_config) to restrict access to that user and avoid brute-force attacks. Since robots are most of the time dumb, they will get an immediate "access denied" response to any connection request.

  • Robots usually scan port 22. To change the port the SSH server listens on, either modify the default port in the SSH server configuration file (usually /etc/ssh/sshd_config), or use your router to redirect an outside port (for instance 2221) to the default port of your server. A sketch of the corresponding configuration lines is given below.
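
As a minimal sketch, the corresponding excerpt of /etc/ssh/sshd_config could look like this (admin_name and 2221 are just the example user and port used above):

    # /etc/ssh/sshd_config (excerpt)
    # listen on a non-default port and only allow one user to log in
    Port 2221
    AllowUsers admin_name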

Neuroinformatics and computational neuroscience

The CNRS interdisciplinary program "Neuroinformatique et neurosciences computationnelles" has launched its 2010 call for proposals (Neuroinformatique et neurosciences computationnelles - Appel d'offres):

our proposal

  • CALL FOR PROPOSALS

    CNRS INTERDISCIPLINARY PROGRAM

    • Neurosciences and computational neuroinformatics (Neuro-IC)

    • Declaration of candidacy

      • Short title of the project (maximum 20 characters): Émergence in computo

Long title of the project (maximum 3 lines): Multidisciplinary collaborations between neuroscience teams in Marseille for an integrative approach to the study of computational strategies.

Project coordinator: Mr. First name: Laurent. Last name: PERRINET. Position: CR CNRS. Laboratory (full name and acronym, if applicable): Institut de Neurosciences Cognitives de la Méditerranée (INCM, UMR 6193). Full address of the laboratory: 31, chemin Joseph Aiguier, 13402 Marseille cedex. E-mail: Laurent.Perrinet@incm.cnrs-mrs.fr. Tel.: (0-33) 4 91 16 43 08. Fax: (0-33) 4 91 22 08 75.

short CV of the project leader (1 page)

  • Researcher (CR1) at INCM, CNRS, since 10/2004. Under the direction of Guillaume Masson at the Institut de Neurosciences Cognitives de la Méditerranée (INCM, UMR 6193, CNRS) in Marseille, I study spatio-temporal inference models in video streams. The goal of this work is to define an adaptive, neuromimetic algorithm for the over-complete representation of a video stream, which we wish to apply to the extraction of the velocity field.

  • PhD in Cognitive Science, ONERA/DTIM, Toulouse (France), 10/1999-2/2003. Title: "Comment déchiffrer le code impulsionnel de la Vision? Étude du flux parallèle, asynchrone et épars dans le traitement visuel ultra-rapide". Holder of a MENRT grant, hosted at ONERA/DTIM. This thesis grew out of the results of the collaboration started during the DEA internship. It was supervised by Manuel Samuelides (professor at Supaéro and researcher at ONERA/DTIM) and co-supervised by Simon Thorpe (research director at CerCo).

  • DEA (master's degree) in Cognitive Science (Univ. Paris VII, P. Sabatier, EHESS, Polytechnique), Paris (France), with highest honors; holder of a DEA grant, 9/1998-9/1999. 3/1999-7/1999: research assistant, ONERA/DTIM (Département de Traitement de l'Image et de Modélisation), Toulouse (DEA internship). 7/1999-8/1999: research assistant, USAFB (Rome, NY) / University of San Diego in California (United States).

  • Engineering degree, Supaéro, Toulouse, France, 1993-1998. Specialization in signal and image processing, in particular in artificial neural network techniques. 4/1998-9/1998: research assistant, CerCo (CNRS, UMR 5549), Toulouse (final-year engineering internship); development of an asynchronous neural network applied to character recognition. 4/1997-9/1997: research assistant, Jet Propulsion Laboratory (NASA), Pasadena, California; Earth Sciences division, radar imaging laboratory, SAR interferometry. 9/1995-6/1996: engineer, Alcatel, Vienna (Austria), Voice Processing Systems department.

Requested budget (summarized) and duration of the project: 30 k€ / 2 years. The budget will be allocated in a single installment. The program will not provide any staff. 20 k€: computing hardware (see enclosed quote); 7 k€: organization of a conference and a workshop; 3 k€: travel and communication. Has this project already been evaluated, or is it the subject of a pending application? In what context? On what date? This project has not been evaluated before. More precisely, is this project part of a proposal submitted to the ANR in the last two years, or will it be submitted in the current year? No. Has it been accepted? If pending, when do you expect an answer? Are you requesting other funding? If so, specify in what context, the amount requested and the duration of the contract. No.

Teams participating in the project:

Team name | Team leader (last name, first name) | Laboratory ID

DyVA (Anna Montagnini, Laurent Perrinet) | MASSON Guillaume | UMR 6193
DNA (Andrea Brovelli) | BOUSSAOUD Driss | UMR 6193
Neurosciences Théoriques et Système Complexes (Jean-Luc Blanc) | PEZARD Laurent | UMR 6149
Contrôle et apprentissage des déplacements finalisés (Emmanuel Daucé) | MONTAGNE Gilles | UMR 6233

Émergence in computo

Project summary:

Currently, a new class of computers is emerging that favors parallelism over the raw speed of a central processor. This new technology diverges from the classical von Neumann architecture and moves closer to neuromimetic architectures. In particular, it requires studying specifically how such an architecture can coordinate distributed computations at different scales, but also how it can become adaptive and learn according to given functional costs. It therefore offers a double opportunity for the computational neuroscience community: it provides computing tools that are more efficient because they are closer to neuromimetic models, and it also opens a field of application for this new class of computers. This project aims to bring together actors in Marseille coming from different disciplines (neurophysiology, psychology, modeling, inferential statistics, information theory) around the theme of the emergence of new computational strategies in neuromimetic circuits.

Scientific description of the project:

Context

At the interface between electrophysiological studies and modeling, the simulation of large networks of neurons aims to test in silico functional hypotheses concerning the activity of, and the interactions between, populations of neurons ranging from a few hundred to a few hundred thousand cells. These studies aim to compensate for certain blind spots due to the lack of measurement tools, mainly concerning the connection schemes. The idea is to be able to test, with a reliable and standardized tool, hypotheses about these circuits and their interaction patterns, from the scale of a micron up to a few millimeters.

Computational tools are an essential complement to electrophysiological studies and are unfortunately still underdeveloped. What must be promoted and developed in neuroscience laboratories is therefore, above all, a culture of simulation. In the context of the project to regroup several laboratories on the Timone campus in Marseille, a working group on computational neuroscience has been formed. The goal of this group is of course to foster scientific exchanges on questions specific to computational neuroscience, and to animate the community through invited speakers and occasional workshops. This group has a strong interdisciplinary potential, and it appears that its activity could also be structured around a common project to which everyone would contribute: a development platform based on a massively parallel multiprocessor architecture, using open and current standards (python, MPI, ...). The idea is to build a simulation tool belonging to the community, intended to concentrate all the "computational" demand (that is, the demand not specifically aimed at data analysis).

Cet outil pourrait servir de base pour initialiser, populariser et structurer cette approche auprès de la communauté des neurosciences, avec comme objectif de proposer à moyenne échéance une interface intuitive sur laquelle certaines idées où des schémas computationnels pourraient être testés sans connaissance en programmation multi-tâche. Cette interface prendra en pratique la forme d'un outil de simulation et d'analyse des données piloté par une interface "web". Cet outil pourrait également servir à la formation des étudiants, avec une idée d'unification des outils et le développement d'un langage de description commun basé sur des normes de description standardisés.

Définition et réalisation du projet

Actuellement, la limite principale aux validations d'hypothèses scientifiques en neurosciences computationnelles est la capacité à traduire et valider ces idées sous la forme d'un code informatique. Le projet "Émergence in computo" vise à regrouper des acteurs marseillais autour de l'émergence de nouvelles stratégies computationnelles dans des circuits neuro-mimétiques. Si chacun des acteurs provient d'une discipline différente, l'étude de leur thématique respective montre qu'il est naturel de les regrouper autour de ce thème commun. L'approche que nous considérons la plus productive est alors de structurer la recherche menée par ces acteurs grâce à une infrastructure commune afin de stimuler la production de résultats computationnels de plus large envergure. En effet, cette mise en commun de moyens pour un groupe de travail issu d'équipes différentes permettra de développer des recherches transversales à l'interface de la théorie de l'information, du traitement du signal et de la modélisation. Aborder ces problématiques différentes sur un outil commun favorisera le rapprochement de points de vue entre disciplines séparées : utilisation de méthodes probabilistes communes à l'analyse des données et à la modélisation, utilisation de descriptions dynamiques similaires pour les points de vue macro-, méso- et micro-scopiques, mise en commun de méthodes non biaisées pour l'estimation statistique de quantités d'information ou encore utilisation d'une formalisation et de codes inter-échangeables. Le financement de ce projet est donc essentiel pour ouvrir ces perspectives:

  • donner rapidement les moyens de travailler ensemble grâce à un outil de calcul puissant,

  • faire émerger des collaborations inter-disciplinaires autour d'un "langage" commun: il permettra de dialoguer avec un langage de programmation, des librairies et une terminologie communes, en collaboration avec l'initiative NeuralEnsemble,

  • donner une tribune pour cette initiative: en particulier, l'organisation rapide d'une conférence nous permettra d'inviter des personnalités scientifiques qui nous aideront à définir les problématiques communes. Un atelier, organisé une fois le projet mûr, nous permettra ensuite de partager approches, techniques et résultats avec la communauté marseillaise ainsi qu'avec nos collaborateurs proches.

Présentation des thématiques par les acteurs du projet

Jean-Luc Blanc IR CNRS, équipe : Neurosciences Théoriques et Système Complexes

Codage neuronal et théorie de l'information: Un problème fondamental est de comprendre comment l'activité d'une population de neurones, observée dans la fréquence ou l'organisation temporelle des trains de potentiels d'action ou dans les potentiels de champs locaux, porte de l'information sur le monde extérieur. Il existe deux méthodes complémentaires pour étudier quantitativement comment le cerveau extrait les caractéristiques et déchiffre les informations encodées dans l'activité de la population neuronale : les algorithmes de décodage et la théorie de l'information. La première méthode prédit un stimulus ou un comportement à partir d'un pattern de réponses neuronales. La deuxième précise la quantité d'information contenue dans l'activité neuronale à propos des stimuli ; cette quantité est calculée en utilisant le formalisme de la théorie de l'information de Shannon. L'étude des relations statistiques entre les réponses corticales et les stimuli est souvent réalisée dans le cadre de la théorie de l'information pour quantifier l'information transmise par les réponses neuronales par rapport à un ensemble de stimuli. Cette approche a notamment l'avantage de permettre de définir un ensemble optimal de stimuli (ou de représentations neuronales) qui maximise l'information mutuelle entre les stimuli et les réponses. Une procédure adaptative permet de déterminer ces ensembles de manière itérative (Blahut-Arimoto, 1972).

Indicateurs pour les systèmes complexes et dynamique de séquences symboliques: Les études expérimentales du système nerveux impliquent des enregistrements de l'évolution temporelle de l'activité corticale, qui sont comparables à des séquences de symboles. En suivant ce point de vue, le système nerveux, qu'il soit chaotique ou non, est capable de générer des messages et peut donc être considéré comme une source d'information. En s'inspirant de l'idée de Kolmogorov de caractériser les systèmes dynamiques par des quantités comme l'entropie, il est possible d'estimer cet indicateur à partir de signaux expérimentaux provenant de différentes échelles d'observation (EEG, LFP, spikes). Cependant l'estimation d'un tel index (asymptotique) est souvent biaisée par la quantité limitée de données et par la structure de corrélation des données. Certaines approches algorithmiques permettent de contourner cette limitation.
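
À titre purement illustratif (esquisse, et non le code du projet), un estimateur d'entropie « plug-in » avec la correction de biais de Miller-Madow, appliqué à une séquence symbolique (par exemple des comptes de potentiels d'action discrétisés), peut s'écrire ainsi en Python :

    # Illustrative sketch only (not the project's code): plug-in entropy of a
    # symbol sequence, with the first-order Miller-Madow bias correction.
    import numpy as np

    def entropy_miller_madow(symbols):
        symbols = np.asarray(symbols)
        n = symbols.size
        _, counts = np.unique(symbols, return_counts=True)
        p = counts / float(n)
        h_plugin = -np.sum(p * np.log2(p))      # naive (biased) estimate, in bits
        k = counts.size                         # number of symbols actually observed
        return h_plugin + (k - 1) / (2. * n * np.log(2))

    print(entropy_miller_madow(np.random.randint(0, 4, size=200)))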

Andrea Brovelli CR1 CNRS, équipe: Dynamique Neuronale et Apprentissage

Les humains et les singes ont une capacité remarquable à apprendre de nouvelles relations arbitraires entre un stimulus visuel, une action et la conséquence de cette action. L'apprentissage visuomoteur arbitraire est une forme de conditionnement instrumental (ou opérant) qui nous permet d'apprendre les conséquences de nos actes dans un contexte donné (par exemple, ne pas toucher une plaque électrique lorsqu'elle est allumée). Cette fonctionnalité nous assure une grande capacité d'adaptation face aux situations nouvelles et nous permet également de développer des habitudes robustes lorsque le contexte est stable. De plus, certains comportements pathologiques, tels que les troubles obsessionnels compulsifs et, plus vraisemblablement, les addictions, sont étroitement liés à cette faculté cognitive. La compréhension des principes fondamentaux et de leurs implémentations neurales représente un défi important pour les neurosciences cognitives modernes. Mon objectif est de comprendre comment le cortex frontal et les ganglions de la base régissent l'apprentissage instrumental. Plus précisément, on cherchera à identifier le rôle fonctionnel des boucles fronto-striatales et à caractériser leur dynamique d'activation au cours de l'apprentissage. Les travaux sont menés à la fois chez le primate humain et non-humain, en s'attachant à intégrer les connaissances issues de ces deux espèces, grâce en particulier à l'application chez l'homme de tâches comportementales développées chez le singe. Notre approche combine les données comportementales et neurales enregistrées à différents niveaux dans le cerveau (activité unitaire, LFPs, EEG intracrânien, IRMf) avec des modèles computationnels de l'apprentissage. Le but à long terme est d'élucider les liens entre la plasticité cérébrale au cours de l'apprentissage à différents niveaux d'analyse, du neurone isolé aux réseaux cérébraux.

Emmanuel Daucé Enseignant-chercheur, maître de conférences à l'école centrale de Marseille, équipe: Contrôle et apprentissage des déplacements finalisés.

Nous considérons l'étude analytique ou par simulation des comportements collectifs de populations de neurones. L'étude analytique vise à estimer les comportements attendus des grands ensembles de neurones en fonction des paramètres macroscopiques définissant les catégories de liens entre populations. Différents régimes dynamiques peuvent ainsi être définis, ainsi que des grandeurs dites de "champ moyen" fournissant une description concise de l'activité d'une population entière. Le travail de simulation vient en complément pour aborder des questions pour lesquelles il est plus difficile d'effectuer des prédictions, comme par exemple lorsque l'on considère l'effet de la plasticité synaptique. Dans ce cas, il est fait appel à des concepts et méthodes venus de l'apprentissage automatique (apprentissage de politiques sur la base de signaux de récompenses, codage par fonctions noyaux), appliqués à des dispositifs de contrôle et réseaux de neurones biologiquement inspirés. Des simulations massives servent alors à valider les schémas proposés, qui doivent obéir à la double contrainte d'être efficaces (performance accrue au cours de l'apprentissage) et réalistes (en particulier respecter la contrainte de localité de l'information, ce qui exclut de nombreux schémas classiques utilisant des informations "off-line" et non locales).
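
À titre purement illustratif (hypothèses simplificatrices : une seule population homogène, fonction de transfert sigmoïde, paramètres arbitraires), l'équation de champ moyen la plus simple, tau dr/dt = -r + phi(J r + I), peut être intégrée comme suit :

    # Toy sketch (made-up parameters): Euler integration of a single-population
    # mean-field rate equation  tau * dr/dt = -r + phi(J * r + I).
    import numpy as np

    def simulate_mean_field(J=8.0, I=-2.0, tau=10.0, dt=0.1, T=500.0, r0=0.1):
        phi = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoidal transfer function
        r, rates = r0, []
        for _ in range(int(T / dt)):
            r += dt / tau * (-r + phi(J * r + I))
            rates.append(r)
        return np.array(rates)

    print(simulate_mean_field()[-1])   # steady-state population rate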

Anna Montagnini CR2 CNRS, équipe: Dynamique de la Perception Visuelle et de l'Action

Je m'intéresse au contrôle visuo-oculomoteur chez les sujets humains en tant que modèle idéal de prise de décision dans des conditions simplifiées et bien contrôlées au niveau expérimental. J'utilise une approche couplée entre expérimentation (psychophysique et analyse des mouvements oculaires à haute résolution) et modélisation (représentation probabiliste de l'information, inférence, théorie de la décision). En particulier, dans le cadre d'un processus simple de décision visuo-oculomotrice (cf. la poursuite oculaire d'une cible en mouvement dans une direction tirée au hasard à chaque essai), je m'intéresse à l'étude de la représentation interne de l'information a priori et de son incertitude. Par information a priori on entend ici l'information préalable à l'observation du stimulus sensoriel qui détermine la réponse motrice correcte: il s'agit donc d'une information prédictive. Dans les expériences, l'information a priori est manipulée statistiquement, de manière à introduire un biais de probabilité dans le type de réponse requise et donc créer des attentes «asymétriques». Ces attentes (ou représentation interne du Prior) se traduisent par une variable comportementale mesurable, les mouvements d'anticipation de Poursuite Oculaire, qui permettent d'étudier la dépendance du Prior à la statistique de l'entrée sensorielle, ainsi que l'évolution dynamique de cette représentation interne.
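
Esquisse jouet, à seule fin d'illustration (le gain v_max et la relation linéaire sont des hypothèses, et non les résultats de l'équipe) : une mise à jour bayésienne Beta-Bernoulli du biais directionnel au fil des essais, l'anticipation de poursuite étant prise proportionnelle à la probabilité attendue de la direction « droite » :

    # Toy sketch only: Beta-Bernoulli update of the direction bias across trials;
    # the anticipatory pursuit velocity is, by assumption, proportional to the
    # expected probability of rightward motion (v_max is a made-up gain).
    import numpy as np

    def anticipatory_velocity(directions, a0=1.0, b0=1.0, v_max=10.0):
        a, b, v = a0, b0, []
        for d in directions:                     # d = 1 (right) or 0 (left)
            p_right = a / (a + b)                # current estimate of P(right)
            v.append(v_max * (2 * p_right - 1))  # signed anticipation before the trial
            a, b = a + d, b + (1 - d)            # Bayesian update after the trial
        return np.array(v)

    print(anticipatory_velocity((np.random.rand(20) < 0.75).astype(int)))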

Laurent Perrinet CR2 CNRS, équipe: Dynamique de la Perception Visuelle et de l'Action

Un problème fondamental en neurosciences est de comprendre comment l'information locale représentée sur le champ récepteur des neurones peut permettre de voir l'émergence d'une perception ou d'une décision comportementale qui soit globale. Je m'intéresse à relier des méthodes de disciplines a priori éloignées (probabilités, physique statistique, informatique, neurosciences) pour proposer des solutions à ce problème. Appliqué à la vision, nous étudions en particulier des stratégies d'intégration spatio-temporelle en les confrontant à des données d'imagerie ou comportementales. Celles-ci sont comparées à des solutions utilisant des représentations distribuées probabilistes qui sont optimales au sens de la théorie de l'information. Elles permettent en particulier d'expliquer comment le système visuel peut intégrer des informations dynamiques, bruitées et souvent ambiguës en utilisant des stratégies inférentielles, comme par exemple par la propagation d'informations prédictives. Ces systèmes sont éminemment contraints par la dynamique et la connectivité des neurones qui les constituent. J'étudie en particulier comment relier la structure de ces systèmes dynamiques avec les fonctions qu'ils implantent. Pour cela, j'utilise des modèles d'apprentissage non-supervisés appliqués à des scènes naturelles. J'étudie alors l'émergence dans la connectivité neuronale de structures qui optimisent un coût fonctionnel. Ces modèles permettent d'étudier des catégories différentes de résultats en fonction de paramètres fondamentaux de l'entrée sensorielle -comme sa complexité par rapport à la taille du réseau- ou des neurones -comme la vitesse de conduction latérale maximale dans une aire corticale. Ce dernier exemple montre la généralité de contraintes locales simples ayant des effets macroscopiques importants et qui sont essentiels pour des stratégies de calculs parallèles. En effet, cette vitesse conditionne la synchronisation, même partielle, de l'information sur l'état des différents nœuds du système pris globalement.

Conclusion

Le projet "Émergence in computo" rassemble des acteurs de disciplines différentes mais de thématiques fortement inter-connectées. Nous pouvons identifier dans les thématiques présentées ci-dessus une approche commune centrée autour du rôle de l'apprentissage et des représentations internes. En particulier, nous voyons émerger les thèmes suivants:

  1. Modélisation des populations neuronales: codage et plasticité (Andrea Brovelli, Emmanuel Daucé, Laurent Perrinet),

  2. Analyse de l'activité neuronale à différentes échelles: codage neuronal et plasticité, apprentissage supervisé (Jean-Luc Blanc, Andrea Brovelli, Emmanuel Daucé),

  3. Modélisation comportementale et représentations internes (Jean-Luc Blanc, Anna Montagnini, Laurent Perrinet).

Ce projet vise à financer les moyens computationnels et scientifiques nécessaires à la réalisation de telles perspectives. C'est pourquoi le soutien de Neuro-IC est essentiel à la réalisation du projet "Émergence in computo". Ces moyens sont de trois ordres:

  • une plateforme commune de calcul sous forme d'un "cluster" (voir devis inclus) : 20k€,

  • de l'animation scientifique, par l'organisation d'une conférence et d'un atelier : 7k€,

  • des moyens de fonctionnement : 3k€.

Le budget total de ce projet est donc de 30k€.

Neuroinformatique et neurosciences computationnelles

  • Contexte

  • Objectifs et plus-value attendue

  • Descriptif du programme

  • Enjeu scientifique interdisciplinaire

Contexte

Comprendre le cerveau reste encore à l'heure actuelle un défi majeur pour les scientifiques de toutes disciplines. Le cerveau représente la structure la plus complexe jamais construite par la nature: cent milliards (10^11) de neurones connectés par un réseau d'une complexité inimaginable (10^14 à 10^15 connexions), et qui est capable de traiter des informations très complexes en un temps record, comme l'analyse instantanée d'une scène visuelle. Ce traitement d'information se fait au travers de la mise en action simultanée de groupes de neurones qui forment des patrons d'activité spécifiques. La grande complexité du cerveau lui permet non seulement de traiter des informations complexes, mais elle le rend aussi d'autant plus vulnérable à divers dysfonctionnements, qui résultent en pathologies telles que la schizophrénie, l'épilepsie, les troubles de la mémoire, du langage, etc.

La compréhension des mécanismes cérébraux dépasse donc largement la recherche fondamentale: elle possède des implications directes dans la compréhension et le traitement de pathologies. Elle possède aussi des implications directes au niveau technologique, dans la construction de machines capables de traiter l'information de façon « intelligente », tel que le traitement d'informations du monde réel, scènes visuelles, auditives, etc.

Les neurosciences computationnelles représentent une discipline relativement récente et dynamique, dont le but affiché est de comprendre le cerveau par des moyens théoriques et informatiques. Cette discipline combine l'expérimentation avec la théorie et les simulations numériques, ce qui permet d'ouvrir toute une série de possibilités nouvelles au niveau scientifique et d'applications technologiques. La neuroinformatique concerne plus spécifiquement les aspects informatiques, tels que la conception et la réalisation de méthodes d'analyse mathématiques, la constitution de bases de données en neurosciences et les outils qui s'y rapportent. Les neurosciences computationnelles et la neuroinformatique combinent donc des spécialistes d'horizons différents, tels que les biologistes, physiciens, mathématiciens, informaticiens, ingénieurs, et médecins. Ces spécialistes identifient les principes du fonctionnement cérébral, et ils formalisent ces principes sous forme de modèles théoriques qui sont ensuite testés par la simulation numérique. Ces modèles peuvent également être implémentés directement sur des circuits électroniques, dans le but de créer de nouvelles générations de calculateurs. Ils peuvent aussi être utilisés comme outil pour investiguer les dysfonctionnements du cerveau, en particulier dans le cas où les pathologies résultent d'interactions multiples.

Mais plutôt que de représenter des domaines séparés, les neurosciences théoriques et expérimentales fonctionnent souvent ensemble, de façon synergique. Aux USA et en Europe, il existe de nombreux centres où les laboratoires expérimentaux et théoriques se côtoient, comme les centres Bernstein allemands ou Gatsby anglais, le Brain & Mind Institute et l’Institute for Neuroinformatics en Suisse, le RIKEN Institute au Japon, et les nombreux centres américains (Keck, Sloan, Swartz centers, etc) [Pour une liste des centres de neurosciences computationnelles, et leurs coordonnées sur Internet, voir : http://home.earthlink.net/~perlewitz/centers.html]. La France est plus timide à ce niveau, avec plusieurs unités INSERM ou CNRS qui combinent les expertises théoriques et expérimentales, mais aucun institut ou centre plus ambitieux n’a encore pu voir le jour (cfr. Faugeras, Samuelides & Frégnac, A future for systems and computational neuroscience in France ? J. Physiol. Paris 101 : 1-3, 2007).

À l'image de cette interaction théorie/expérience, de nombreux projets européens ont vu le jour, et certains de ces projets ont une renommée internationale. Il faut noter l'existence de programmes spécifiquement inter-disciplinaires, comme le programme Future and Emerging Technologies (FET) de la Communauté Européenne, qui vise à subventionner des projets pluri-disciplinaires, ambitieux et innovants. De nombreux projets de neurosciences, alliant la théorie et l'expérimentation avec des nouvelles technologies, ont été subventionnés par ce programme. En particulier, des projets récents tels que FACETS, DAISY et SECO consistent à allier l'expérimentation biologique, pour caractériser les neurones et les circuits neuronaux, avec des approches théoriques pour formaliser ces principes biologiques, et ensuite l'ingénierie pour implémenter ces modèles sur des circuits intégrés. Il en résultera de nouvelles générations de circuits intégrés qui fonctionneront de façon analogue aux circuits neuronaux réels. Ces circuits pourront être utilisés pour tester des principes biologiques, aider à l'exploration des propriétés des circuits neuronaux, suggérer de nouvelles expériences, etc. : la boucle est bouclée. Une des réalisations de ces projets a été la conception de circuits intégrés contenant un grand nombre de neurones de type intègre-et-tire, qui permettront la simulation (analogique) de réseaux de centaines de milliers de neurones, avec une vitesse de calcul 100 000 fois plus rapide que le temps réel, une performance qui dépasse celle des plus gros calculateurs parallèles !

Même si des groupes français occupent une place importante dans des projets tels que FACETS et DAISY, il faut déplorer l'absence de programmes ambitieux à l'échelle nationale. Plusieurs actions ont vu le jour (ACI neurosciences computationnelles, programmes CTI et neuroinformatique, par exemple), et elles ont mené à des projets intéressants, mais leur budget limité n'a pas permis de vraiment structurer la communauté théorique et computationnelle en neurosciences. Réaliser une telle structuration, et la stabiliser, nécessiterait de mettre sur pied un réseau d'excellence avec un budget important et des postes pour les nombreux jeunes chercheurs du domaine. Par exemple, l'initiative récente des Bernstein Centers en Allemagne a permis de structurer le domaine de façon très significative en créant plusieurs centres, et de nombreux postes de chercheurs. Aucune initiative de cette envergure n'a encore pu voir le jour en France.

Objectifs et plus-value attendue

L'objectif du programme Neuro-IC est double :

  • de soutenir des actions fortement interdisciplinaires comme exposé ci-dessous. Le but de ce soutien est de jouer un rôle de tremplin vers la réalisation et l'élaboration de projets ambitieux qui combinent différentes disciplines, comme la biologie, la physique, l'ingénierie et l'informatique;

  • d'identifier différentes équipes fortes dans le domaine et qui formeraient le noyau d'un éventuel futur réseau d'excellence dans le domaine des neurosciences computationnelles et de la neuroinformatique.

Descriptif du programme

Le programme Neuro-IC soutiendra des projets de recherche fondamentale et de recherche appliquée sur des problématiques liées aux Neurosciences, abordées de manière interdisciplinaire avec la participation significative de chercheurs de disciplines telles que les Mathématiques, la Physique, l’Informatique, la Robotique ou le Traitement du signal. Une attention particulière sera donnée aux projets à l'interface neurosciences/sciences humaines. Le but du programme est en particulier de soutenir des actions interdisciplinaires qui constituent des projets aux idées radicalement nouvelles, de préférence entre partenaires qui n’ont jamais collaboré, et/ou jamais contribué à ce champ de recherche. Les projets qui comportent un facteur de risque substantiel sont particulièrement encouragés. Typiquement, le programme soutiendra des actions à caractère exploratoire et dont le niveau de risque (et l’absence de données préliminaires) interdisent l’écriture d’un projet de type ANR ou européen. Le programme servira donc de tremplin vers l’élaboration de projets plus ambitieux – cet aspect fondateur sera particulièrement important dans l’évaluation des projets.

Il n’y a pas de restriction thématique pour autant que les projets allient clairement les neurosciences avec au moins une autre discipline, dans le cadre d'un projet de nature théorique, numérique ou d’ingénierie. A titre d'exemples de thèmes, on peut mentionner l’étude de la relation structure-fonction dans les réseaux neuronaux (lien entre connectivité et comportement), l’étude de la dynamique d’émergence d’états pathologiques, l’étude du codage neuronal, de l’attention ou de la cognition, ainsi que la conception de nouveaux types de calculateurs inspirés de l’architecture du cerveau, des projets de robotique bio-inspirée, ou encore des projets alliant expérimentation et modélisation sur des thèmes issus des sciences humaines et sociales. Le programme soutiendra les thèmes traditionnels de la neuroinformatique, tels que la constitution de bases de données en neuroscience, ou la conception de nouvelles méthodes d’analyse de données. L’aide à la conception et/ou l’étude de faisabilité de nouvelles techniques expérimentales en neuroscience (par exemple nouvelles techniques d’imagerie) sera également soutenue, pour autant que ce type d’étude soit exploratoire et fondateur. Enfin, l'application de nouvelles méthodes de la physique théorique aux neurosciences est encouragée.

Les budgets demandés seront typiquement du fonctionnement, de l’équipement et des missions, de l’ordre de 30,000 Eur. Le programme ne pourra pas financer de salaire. Il est important qu’il y ait une adéquation entre le projet demandé et le budget (les « recyclages » de projets antérieurs ne seront pas évalués). L'usage envisagé de la somme demandée doit faire l'objet d'un budget détaillé et clairement motivé (une page maximum).

Les demandes devront faire l'objet d'une présentation scientifique courte, 5 pages maximum (sans annexe, références incluses), complétée d'un CV bref des partenaires principaux (une page maximum). Les aspects exploratoires et interdisciplinaires doivent être explicités (ils constituent les critères principaux d’acceptation, en plus de l’excellence scientifique du projet). Chaque projet sera examiné par 2 ou 3 rapporteurs de disciplines différentes.

L’appel à projet sera publié début janvier, avec une date limite de soumission début février. Ceci permettra de financer les projets retenus en mars de l’année d’acceptation. Les subventions accordées, utilisables pour toute dépense à l'exception de salaires ou vacations, seront à dépenser avant le 31 décembre de la même année.

À l’issue du projet, il sera demandé aux auteurs de rédiger un rapport court (de l'ordre de 5 pages) sur les résultats obtenus au cours du projet et les développements qu’il a contribué à réaliser (publications, soumission de projet ANR ou Européen, démarrage d’autres projets plus ambitieux, etc).

Enjeu scientifique interdisciplinaire

Le rôle majeur de ce programme est de favoriser, par le rapprochement Neurosciences-Neuroinformatique, une meilleure dynamique dans l’approche de la complexité du système nerveux. Déjà opérationnelle dans quelques grands centres (Bernstein, Gatsby, Brain & Mind Institute, Institute for Neuroinformatics, RIKEN Institute, Keck, Sloan, Swartz centers, etc), cette approche contribue au développement de la recherche fondamentale mais aussi, dans des pathologies chroniques, graves et fréquentes (maladies neurodégénératives, paraplégie, douleur, maladies mentales) à la définition de nouvelles stratégies thérapeutiques (prothèse, robot, nanotechnologie et neurostimulation, réalité virtuelle et troubles de la représentation du corps dans l’autisme…). Cette approche est également indispensable dans la conception de nouvelles architectures de calcul, inspirées du cerveau.

Le programme Neuroinformatique et Neurosciences Computationnelles peut aussi être vu comme une étape préliminaire et nécessaire à un plan d'action structurant plus ambitieux à venir, dont la mise en place dépendra de l'ambition scientifique des institutions concernées.

The original eve

  • One common statement in popular science, when speaking about evolution, is that we all derive from a common ancestor, the "original Eve". While, a posteriori, it is true that mitochondrial DNA allows us to trace back common ancestors in our heredity, the claim is certainly overstated. When thinking about evolution, our focus is to look back from the present to our origins, but at the time of these "few" original Eves, many different Eves coexisted and acted - as parts of the whole population - in the evolution.

  • Evolution is more like a Banyan tree than a graph drawn on a board where species evolve from primitive forms into more and more complex forms, culminating in an idealized vision of our own position on this tree.

  • Even worse for the gap between reality and popular preconceptions about evolution: evolution is also lateral. More on this subject @ Horizontal and vertical: The evolution of evolution


contributing to the python community

(or "would be hard to give as much as I took :-)" )

which or that?

From http://www.businesswritingblog.com/business_writing/2006/01/that_or_which.html

That usually introduces essential information in what is called a "restrictive clause." Which introduces extra information in a "nonrestrictive clause."
  1. Example from a recent email:

    • "I am offering a new class, Email Intelligence, that/which may be an excellent fit for your training needs and budget."

    • Does the clause (in red) introduce information that is essential to knowing which Email Intelligence class?

    • No. The clause provides extra information, so which is correct.

  2. Revised example:

    • "Among my new programs, I am offering a class that/which may be an excellent fit for your training needs and budget."

    • Does the clause (in red) introduce information that is essential to knowing which class?

    • Yes. The clause tells which class--a class that may be an excellent fit. Therefore, that is correct.

convert a bitmap image to a vectorized PDF using mkbitmap and potrace

Warning

This post is certainly obsolete...

  • it's a snap to install using MacPorts

    $ port info potrace
    potrace @1.8 (graphics)
    Variants:             a4_default, metric_default, universal
    
    Description:          Potrace is a utility for tracing a bitmap, which means, transforming a bitmap into a smooth, scalable image. The
                          input is a bitmap (PBM, PGM, PPM, or BMP), and the default output is one of several vector file formats. A
                          typical use is to create EPS files from scanned data, such as company or university logos, handwritten notes,
                          etc. The resulting image is not jaggy like a bitmap, but smooth. It can then be rendered at any resolution.
    Homepage:             http://potrace.sourceforge.net/
    
    Library Dependencies: zlib
    Platforms:            darwin
    License:              unknown
    Maintainers:          nomaintainer@macports.org
    manga:~ lup$ port variants potrace
    potrace has the variants:
       a4_default: compile potrace with A4 as the default page size.
       metric_default: compile potrace with centimeters as the default unit  instead of inches.
       universal: Build for multiple architectures
  • to install

    sudo port install potrace +a4_default +metric_default
  • check man pages and open your input for inspection

    man mkbitmap
    man potrace
    open dubout.png
  • you can directly use this workflow (see the batch-processing sketch below)

    convert dubout.png ppm:- | mkbitmap -f 2 -s 2 -t 0.48 | potrace -t 5 --progress -b pdf -o dubout.pdf
  • but, convert being what it is, it is safer to first do

    convert dubout.png dubout.ppm
  • then take some more time fine-tuning the parameters:

    cat dubout.ppm | mkbitmap  -t 0.48 | potrace -t 15 --progress -b pdf -o dubout.pdf
  • in particular, the -x option resets defaults:

    cat dubout.ppm | mkbitmap  -x -s 2 -3 -t 0.5 | potrace -t 25 --progress -b pdf -o dubout.pdf
  • wait and ... enjoy!
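
  • if you have many bitmaps to trace, a small helper can run the same pipeline on every PNG of a folder; here is a minimal sketch (in Python, assuming convert, mkbitmap and potrace are on your PATH):

    # Sketch: batch-convert every PNG of the current folder to a vectorized PDF
    # with the same convert | mkbitmap | potrace pipeline as above.
    import glob, subprocess

    for png in glob.glob('*.png'):
        pdf = png[:-len('.png')] + '.pdf'
        p1 = subprocess.Popen(['convert', png, 'ppm:-'], stdout=subprocess.PIPE)
        p2 = subprocess.Popen(['mkbitmap', '-f', '2', '-s', '2', '-t', '0.48'],
                              stdin=p1.stdout, stdout=subprocess.PIPE)
        subprocess.check_call(['potrace', '-t', '5', '-b', 'pdf', '-o', pdf],
                              stdin=p2.stdout)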

inkscape native

Warning

This post is certainly obsolete...

  • build dependencies

    sudo port install autoconf automake
    
    sudo port install librsvg libwpd libwpg libcroco
    
    sudo port install libxslt boost boehmgc gtkmm lcms intltool popt
    
    sudo port install cairo +quartz+no_x11 cairomm pango +quartz+no_x11 poppler +quartz gtk2 +quartz
    
    sudo port install gsl
    
    sudo port install hicolor-icon-theme
    
    sudo port install subversion
    
    sudo port install libxml2 libxslt
    
    # optional to speed up the compiling process:
    sudo port install ccache
    export CC="ccache gcc"
    export CXX="ccache g++"
  • getting the sources

    cd tmp
    svn co https://inkscape.svn.sourceforge.net/svnroot/inkscape/inkscape/trunk inkscape
    cd inkscape/packaging/macosx/
  • compile

    # Edit the file osx-build.sh to remove the configure option --enable-osxapp
    # (line 24)
    
    # I used TextWrangler for this, pico or another command line editor will do the same.
    
    # Back to the terminal:
    
    # configure it:
    sudo ./osx-build.sh c
    
    # build it:
    sudo ./osx-build.sh b
    
    # install it:
    sudo ./osx-build.sh i
    
    # test it:
    ../../Build/bin/inkscape

    compiles ok :-), but crashes rather rapidly :-(

some unix tips

  • find / -nouser will give you all the files whose owner does not exist in the /etc/passwd table

  • batch converting

    for draw in `find /path/to/wiki/data -name \*.draw`; do
        file=`dirname $draw`/`basename $draw .draw`
        if [ -e "${file}.gif" ]; then
            echo "Converting ${file}.gif to ${file}.png"
            convert "${file}.gif" "${file}.png"
        fi
    done
  • You may always pipe the output of commands through grep to find specific words, but it can also be used to find files that contain a text string:

    grep -lir "some text" *

    The -l switch outputs only the names of files in which the text occurs (instead of each line containing the text), the -i switch ignores the case, and the -r descends into subdirectories.

  • compression

    create : tar cvf nom.tar dir/*
    extract : tar xvf nom.tar
    list : tar tvf nom.tar
  • du -sk : disk usage of files and directories on a disk

  • a2ps archives/unix.txt -Prsi2 pretty-prints the text file (here to the printer rsi2)...

  • dos2unix fic1 fic2 converts fic1 into a unix text file named fic2

  • To check a machine's CPU load, use the sar command. Example: sar -u 1 5 reports CPU usage over 1s intervals, 5 times in a row

MoinMoin: howto install a new theme

  • locally

    scp Downloads/moniker18_2.1.1.zip  perrinet@195.221.164.4:/var/www/moin/perrinet/data/plugin/theme/tmp
    
  • on the server

    cd /var/www/moin/perrinet/data/plugin/theme/
    export USER=www-data
    export GROUP=www-data
    export INSTANCE=/usr/share/moin/htdocs/moniker
    unzip moniker18_2.1.1.zip
    cd moniker18_2.1.1
    cat read\ me\ on\ installing.txt
    cp -r moniker /usr/share/moin/htdocs/
    cp moniker18.py ../../
    chgrp -R $GROUP $INSTANCE
    chgrp -R $GROUP ../../moniker18.py
    vim ../../../../../../perrinet.py # set moniker18 as default

Tips on Filesystems, security et al. on Mac OS X

  • mount AFP sharepoints from the command line (thanks to this hint):

    # mount_afp [-i] [-o options] afp_url node
    mkdir /Volumes/truc
    mount_afp afp://user:PASSWORD@server/truc /Volumes/truc
    umount /Volumes/truc
    rmdir /Volumes/truc

finder nags when changing a file's extension

  • read the default value (should be 0)

    defaults read com.apple.finder FXEnableExtensionChangeWarning
  • change it:

    defaults write com.apple.finder FXEnableExtensionChangeWarning False

Make spell check show only desired languages

HFS+

  • Mac OS X Filesystems

  • One feature of HFS volumes is that open files are referenced by ID rather than by name (a bit like links): for instance, you can keep reading a PDF file while renaming it at the same time. No problem!

  • Safely remove '._' files created by HFS(+)

    find . -name '._*' -print0 | xargs -0 rm

Spotlight

  • to remove the remaining index files on a volume, use the command `` sudo mdutil -E /Volumes/volume_name ``. CLI utilities:

    sudo mdutil --help
    mdutil: unrecognized option `--help'
    usage: mdutil -pE volume ...
            mdutil can be used to manage the metadata stores used by Spotlight.
            -p              publish metadata for the provided volumes.
            -i (on|off)     set indexing status for the provided volumes.
            -E              erase the master copy of the metadata stores for the provided volumes.
            -s              print indexing status for the provided volumes.

System & Security

nmap

Dash board

  • Don't use Dashboard? No particular reason to leave it running, consuming memory. Following http://www.macosxhints.com/article.php?story=20050723123302403, you can turn Dashboard off by doing:

    defaults write com.apple.dashboard mcx-disabled -boolean YES
    killall Dock
  • Unsurprisingly, you change YES to NO to re-enable Dashboard:

    defaults write com.apple.dashboard mcx-disabled -boolean NO
    killall Dock

Unix - X11

Change login window on (Snow) Leopard

  • disrupted by the look of the plasma flames? think it looks like a cheap star trek sundae? check http://paulstamatiou.com/2007/10/31/how-to-change-leopards-login-wallpaper :

    cd /System/Library/CoreServices
    sudo rm DefaultDesktop.jpg
    #sudo mv DefaultDesktop.jpg DefaultDesktop_old.jpg # alternative to the 'rm' above if you want to keep the Aurora stuff (it's still around, try 'locate Aurora')
    sudo ln -s /Library/Desktop\ Pictures/Nature/Stones.jpg DefaultDesktop.jpg

Creating Proceedings (almost) automatically using python and latex

In order to produce the proceedings of the NeuroComp08 conference that we organized, I used a combination of LaTeX and Python to generate a PDF from our preprint server based on ConfMaster. This was due to the lack of an appropriate tool for this system and to the need to be flexible to any last-minute change made by the authors. I used the following steps (they are summarized in the Makefile included at the bottom, which allowed me to rebuild everything whenever a small change was made in any of these steps).

  1. First, in ConfMaster, download the papers from the system (Administrator/Export DB/Download Files/Submit) but also all the metadata in CSV format (Administrator/Export DB/CSV Data to export/Papers). The CSV file had to be cleaned up manually (using vim and OpenOffice) to correct the character encoding and some errors from users. In fact, people sometimes had accents in their names and I ultimately found out that the most flexible way to handle all accents was to translate everything to a good old LaTeX-style encoding (a sketch of such a conversion is given below).
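
     A hypothetical snippet (not the actual clean-up I did, just to give the idea): replace accented characters by their LaTeX equivalents so that the CSV becomes plain ASCII:

        # Hypothetical example (not the actual clean-up): map a few accented
        # characters to their LaTeX equivalents so the CSV is plain ASCII.
        accents = {u'é': r"\'e", u'è': r"\`e", u'ê': r"\^e", u'à': r"\`a",
                   u'ç': r"\c{c}", u'ü': r'\"u'}

        def to_latex(s):
            for char, macro in accents.items():
                s = s.replace(char, '{' + macro + '}')
            return s

        print(to_latex(u'Daucé'))  # -> Dauc{\'e}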

  2. The following script, body.py, links the CSV metadata with the folder of PDFs, and also generates index terms in the resulting body.tex file, used to build the author and keyword indexes:

    1. extracting the information

      1. first, reading the CSV:

        # the csv module allows high-level reading of cells.
        import csv, os
        root = '.' # where you stored the CSV and the PDF folder

        ## gather information from the CSV
        papers = list(csv.reader(open(os.path.join(root,'paper_Neurocomp2008.csv'), "rb"), delimiter=',', quotechar = '"'))
      2. getting the index of particular columns of interest identified in the first line papers[0] of the CSV file:

        def index(vector, match):
            for index, value in enumerate(vector):
                #print value
                if value == match:
                    index_ = index
            return index_

        index_title = index(papers[0],'Title')
        index_contact_author = index(papers[0],'ContactAuthor_LastName')
        index_author1 = index(papers[0],'CoAuthor_1_LastName')
        index_kw1 = index(papers[0],'Keyword1')
      3. getting the relevant data from the CSV by looping over all lines:

        first_author, id = [], []
        db = {}
        for paper in papers[1:]:
            id = int(paper[0])
            db.update( {id : {'contact_author':paper[index_contact_author+1] + ', ' + paper[index_contact_author] } })

            index = index_author1 # index of the first co-author's last name
            author_list = []
            while True:
                if len(paper[index])>1:
                    author_list.append(paper[index+1] + ' ' + '{\\sc ' + paper[index] + '}')
                else:
                    #print paper[index]
                    break
                index += 5
            #print author_list
            db[id].update({'author_list':author_list})
            db[id].update({'title':paper[index_title]})

            keywords, index_kw = [] , index_kw1
            while (index_kw < index_kw1 + 5):  # scan at most the 5 keyword columns
                kw = paper[index_kw]
                #print kw
                if (kw == ''):
                    break
                else:
                    keywords.append(kw)
                index_kw += 1
            db[id].update({'keywords':keywords})
      4. identify relevant papers using the name of the PDF which contains its ID:

        ## link the db with the collection of papers retrieved by the export db feature of confmaster
        paper_directory = os.path.join(root,'NEUROCOMP2008Submissions_final')

        paper_list = os.listdir(paper_directory)

        for paper in paper_list:
            if paper.find('.pdf') > -1:
                conf, id_str, md5 = paper.split('_')
                id_list = int(id_str)
                #print id_list, paper
                db[id_list].update({'pdf':paper} )
      5. remove some:

        ## exclude some papers (rejected / not participating)
        list_excluded = [50,57,18,44]
        for id in list_excluded:
            print ' * Removing ', db[id]['title'], ' from ',  db[id]['author_list']
            del db[id]
      6. sorting data

        # sorting the dictionary by contact_author: (see http://code.activestate.com/recipes/52306/)
        items=db.items()
        backitems = [ [v[1]['contact_author'],v[0]] for v in items]
        backitems.sort()
        sortedlist=[ backitems[i][1] for i in range(0,len(backitems))]
      7. and manually include the program:

        program=[{'Cortical treatments':[56,16]},
                    {'Neuron models':[67,39,15,27,47,64,32, 58,48,21,54]},
                    {'Neural fields and attractor networks':[31,43,8,65]},
                    {'Computational vision':[19,77,13,41,11,12,38,40]},
                    {'Biophysical models':[46,9,51,52,59]},
                    {'Action selection': [22,20,74,37]},
                    {'Connectionnist models':[6,72]},
                    {'BMI and signal processing':[42,70,49,60,63,66,45,7,10,14,76,33,75]},
                    {'Population coding':[61,35,68,26,36,53]},
                    {'Plasticity and  functional specialization':[69,62,29,5,34,24]},
                    {'Network dynamics':[28,25,23,73]},
                    {'Neural interfaces and softwares':[55,71,30]}]
    2. We begin to write the file:

      1. first, the script opens the file and writes a header (I'm using TexShop):

        # write the header
        fic = open('body.tex','w')
        # write the includes for all papers
        fic.write("""%!TEX TS-program = pdflatex
        %%!TEX encoding = Latin1
        %!TEX root = neurocomp08proceedings.tex
        """)
      2. Define the templates of latex commands

        MODEL_include = """\includepdf[pages=-,%saddtotoc={1,subsection,2,%s,%s}]{%s}
        """
        MODEL_index_first = """\index{author}{%s|bb}
        """
        MODEL_index = """\index{author}{%s}
        """
        MODEL_index_kw = """\index{keyword}{%s}
        """
        MODEL_section = """
        \\refstepcounter{section}
        \\addcontentsline{toc}{section}{%s}
        """
      3. Define a function to correctly write the author list

        def make_author_list(author_list):

            if len(author_list)==1:
                s= author_list[0]
            else:
                s= author_list[0]
                if len(author_list)>1:
                    for author in author_list[1:-1]:
                        s +=  ', ' +  author
                s += ' and ' + author_list[-1]
            return s
      4. Main loop

        for themes in program:
            print (themes.keys()[0])
            fic.write(MODEL_section %(themes.keys()[0]))
            for id in themes.values()[0]:
                try:
                    for i_author, author in enumerate(db[id]['author_list']):
                        if i_author == 0: fic.write(MODEL_index_first %(author))
                        else: fic.write(MODEL_index %(author))
                    for kw in db[id]['keywords']:
                        fic.write(MODEL_index_kw %(kw))

                    # some papers were not vertically centered, correcting that manually
                    option = '' # default option
                    if id == 55: option =' offset = 0 -1cm, '
                    if id == 65: option =' offset = 0 -1.9cm, '
                    if id == 13: option =' offset = 0 -2cm, '
                    if id == 40: option =' offset = 0 -1cm, '
                    if id == 70: option =' offset = 0 -2cm, '
                    if id == 62: option =' offset = 0 -1cm, '
                    if id == 29: option =' offset = 0 -2.5cm, '

                    if id == 73: option =' offset = 0 1cm, '
                    if id == 55: option =' offset = 0 -1cm, '
                    if id == 70: option =' offset = 0 -2cm, '

                    #print db[id]['title'] + ', ' + db[id]['author_list']
                    titre = '{\\bf ' + db[id]['title'] + '} by \\emph{' + make_author_list(db[id]['author_list']) + '}'
                    fic.write(MODEL_include %(option, titre,id,os.path.join(paper_directory,db[id]['pdf']) ))
                except:
                    print ' /!\\ Paper ', db[id], ' has no pdf!'
      5. Closing the file

        fic.close()
  3. once this file is created, you may include it in a traditional proceedings latex file neurocomp08proceedings.tex:

    1. Defining the document class and packages: in particular, we use pdfpages and multind.

      %!TEX TS-program = pdflatex
      %!TEX encoding = ISO Latin 1
      %!TEX root = neurocomp08proceedings.tex
      \documentclass[twoside,a4paper]{article}%,draft
      \usepackage[applemac]{inputenc}%
      %
      \usepackage[final]{pdfpages}%
      \usepackage[pdftex, pdfusetitle ,colorlinks=false,pdfborder={0 0 0},pdftitle={Proceedings of the second french conference on  Computational Neuroscience: NeuroComp08}]{hyperref}%
      %
      \usepackage{makeidx}%,showidx}
      \usepackage{multind,multicol} % http://www.cs.ubc.ca/local/computing/software/latex/local-guide/node24.shtml
      \makeindex{author}%
      \makeindex{keyword}%
      %\renewcommand{\indexname}{List of authors}
      \newcommand{\bb}[1]{{\bf #1}} % to make first author bold
      %
      \usepackage{color}%
      \setlength\fboxsep{3pt}%
      %
      % Support for adding page headers and footers
      \usepackage{fancyhdr}
      %% Set the top and left margins so that the header hugs the top right corner of the paper
      %\topmargin -70pt
      %\oddsidemargin -70pt
      % Commands for adding headers and footers
      \pagestyle{fancy}
      %\fancyhead{} % clear all header fields
      %\fancyhead[RO,LE]{\sectionmark}
      \fancyfoot{} % clear all footer fields
      %\renewcommand{\sectionmark}[1]{\bfseries\markboth{\thesection.\ #1}{}}
      \renewcommand{\sectionmark}[1]{\markboth{#1}{}}
      \fancyfoot[LE,RO]{\thepage}
      \fancyfoot[LO,RE]{\colorbox{white}{Proceedings  of the second french conference on  Computational Neuroscience:  NeuroComp08}}
      \renewcommand{\headrulewidth}{0.2pt}
      \renewcommand{\footrulewidth}{0.4pt}
      %\setlength\textwidth{15cm}
      \setlength\headwidth{18.5cm}
      \setlength\textheight{25.85cm}
      %\setlength\hoffset{1cm}
      \topmargin=-1.95cm
      %\usepackage[a4paper,hmargin=1cm,vmargin=1cm]{geometry}
      %\usepackage[a4paper]{geometry}
    2. Begin the document by including the cover as a one-page PDF (converted from an SVG in the Makefile below)

      \begin{document}
      \includepdf[pages=-]{affiche_NeuroComp.pdf}
      \newpage
      
      \includepdfset{pages=-,pagecommand=\thispagestyle{fancy}}
      \newpage
    3. Including a page with the BibTeX entry and the ISBN number (using the macro file ean13.tex)

      %%  FRONTMATTER:
      %
      %%\emptyheads
      \thispagestyle{empty}
      \include{titlepage}
      %\frontmatter
      %\newpage
      %\setcounter{page}{3}
      %\pagestyle{fancy}
      \pagestyle{empty}
      \subsection*{How to cite this proceedings book?}
      \begin{verbatim}
      @proceedings{NeuroComp08,
               Title = {Proceedings of the second french conference on
                           Computational Neuroscience, Marseille},
               Editor = {Laurent U. Perrinet and Emmanuel Dauc{\'e}},
               Isbn = {978-2-9532965-0-1},
               Url = {http://2008.neurocomp.fr},
               Month ={October},
               Year = {2008}}
      \end{verbatim}
      \vfill
      \begin{flushright}
      \input ean13
      \ISBN 978-2-9532965-0-1 %
      \vspace{2cm}
      \EAN 978-29-532965-0-1
      \end{flushright}
      \newpage
      \pagestyle{empty}
      \setlength{\parskip}{1ex plus 0.3ex minus 0.3ex}
      \setlength{\parindent}{1em}
    4. Some verbose introduction, see also titlepage.tex:

      \subsection*{Introduction}
      Ce recueil contient les actes de la seconde conférence française de neurosciences computationnelles qui s'est tenue à Marseille du 8 au 11 octobre 2008.
      
      Les neurosciences computationnelles portent sur l'étude des processus de traitement de l'information dans le système nerveux, du niveau de la cellule jusqu'à celui des populations de neurones et du contrôle du comportement. Le but de cette conférence est de rassembler des chercheurs issus de différentes disciplines, incluant les neurosciences, les sciences de l'information, la physique statistique ou encore la robotique, afin d'offrir un large panorama des recherches menées dans le domaine.
      
      Ce recueil présente les 68 contributions qui ont été présentées lors de la conférence, dans leur ordre d'apparition dans le programme. Le premier jour était consacré aux modèles de la cellule neurale, aux modèles des traitements visuels et corticaux, ainsi qu'aux modèles de réseaux de neurones bio-mimétiques. La seconde journée était consacrée aux interfaces cerveau-machine, à la dynamique des grands ensembles de neurones, à la plasticité fonctionnelle et aux interfaces neurales.
      
      Cette conférence a été rendue possible grâce au soutien de nombreuses institutions, et nous tenons à remercier le CNRS, la Société des neurosciences, Le conseil régional de la région Provence Alpes Côte d'Azur, le conseil général des Bouches de Rhône, la mairie de Marseille, l'université de Provence, l'IFR "Sciences du cerveau et de la cognition", et l'INRIA. Nous remercions chaleureusement la faculté de médecine de Marseille et l'université de la Méditerranée qui nous ont hébergés pendant tout le déroulement de la conférence.
      
      Les organisateurs de la conférence remercient les membres du comité scientifique et du comité de lecture, les auteurs des différentes contributions ainsi que tous ceux qui ont contribué au bon déroulement de ces journées.
      
      
      {\it This proceedings book contains the contributions that were presented at the second french conference on Computational Neuroscience that was held in Marseille from October 8th to 11th, 2008.
      
      Computational neuroscience is the study of the mechanisms governing the processing of information in the nervous system, from the cellular level to the population of neurons and behaviour control. The aim of this conference was to gather people from various fields, including neuroscience, information science, statistical physics or robotics, in order to give a large panorama of the ongoing research in the field.
      
      This book presents the 68 contributions which have been presented at the conference, with respect to their order of appearance in the conference program. The first day was devoted to the modelling of neural cells, to visual and cortical treatments and realistic neural networks models. The second day was devoted to brain-machine interfaces, large-scale and dynamical models, functional plasticity and neural interfaces.
      
      This conference has been made possible with financial support from the CNRS, the French Society of Neuroscience,  the regional council of Provence and of Bouches-du-Rhône, the city of Marseille, the university of Provence, the IFR "Sciences du Cerveau et de la Cognition" and the INRIA. It was kindly hosted by the Marseille medicine faculty and the University of the Mediterranean. We are grateful to all these supporting organizations for helping us gathering the computational neuroscience community in Marseille.
      
      The organizers of this conference would like to thank the scientific committee members and reviewers, the authors of the submitted papers and all those who helped us provide you with the best possible conditions.
      }
      
      \vfill
      \noindent Laurent Perrinet and Emmanuel Daucé\hfill October, 2008
      \newpage
    5. Table of Contents

      %%%%%%%%TOC%%%%%%%%%%%%%%%%%%
      \pagestyle{empty}
      \oddsidemargin=2cm
      \evensidemargin=2cm
      \tableofcontents
      \newpage
    6. Including the above generated body.tex file

      %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
      %   MAINMATTER --  Section by Section
      \pagenumbering{arabic}
      \setcounter{page}{1}
      \oddsidemargin=-1cm
      \evensidemargin=5cm
      \input body %_static
    7. Finally, include both indexes:

      %%%%%%%%%%%%%%%   Author and Subject Index
      \oddsidemargin=2cm
      \evensidemargin=2cm
      \printindex{author}{Author Index}
      \printindex{keyword}{Keyword Index}
    8. And close the book:

      \thispagestyle{empty}
      %\includepdf[pages=-,pagecommand={\thispagestyle{empty}},addtotoc={1,section,1,{\bf Presentation of the INCF} by \emph{{\sc Chatzopoulou}, Elli},8}]{INCF_Neurocomp08.pdf}%
      \includepdf[pages=-,pagecommand={\thispagestyle{empty}}]{INCF_Neurocomp08.pdf}%
      \end{document}
  4. A Makefile eased debugging and flow control:

    latexfile = neurocomp08proceedings
    
    default: $(latexfile).pdf
    
    pdf: $(latexfile).pdf
    
    body.tex: paper_Neurocomp2008.csv body.py
            python body.py
    
    %.eps: %.png
            convert $< $@
    
    %.eps: %.jpg
            convert $< $@
    
    affiche_NeuroComp.pdf: affiche_NeuroComp.svg PACA3-coul_N_.pdf SdN.png  LogoMarseille.png LogoCnrs.png
            inkscape affiche_NeuroComp.svg -A affiche_NeuroComp.pdf
    
    
    $(latexfile).pdf: $(latexfile).tex body.tex titlepage.tex ean13.tex affiche_NeuroComp.pdf
            pdflatex  $(latexfile)
            makeindex keyword.idx
            makeindex author.idx
            pdflatex $(latexfile)
            while ( grep -q '^LaTeX Warning: Label(s) may have changed' $(latexfile).log) \
                    do pdflatex $(latexfile); done
            while ( grep -q 'Rerun to get citations correct.' $(latexfile).log) \
                    do pdflatex $(latexfile); done
    
    
    clean:
            rm -f $(latexfile).out  $(latexfile).pdf $(latexfile).log titlepage.aux \
                    $(latexfile).aux $(latexfile).toc  body.tex keyword.ilg author.ilg \
                    $(latexfile).ind author.idx keyword.idx author.ind keyword.ind
  5. and voilà!

List Of Symbols

symbols

% Math-mode symbol & verbatim
\def\W#1#2{$#1{#2}$ &\tt\string#1\string{#2\string}}
\def\X#1{$#1$ &\tt\string#1}
\def\Y#1{$\big#1$ &\tt\string#1}
\def\Z#1{\tt\string#1}

% A non-floating table environment.
\makeatletter
\renewenvironment{table}%
   {\vskip\intextsep\parskip\z@
    \vbox\bgroup\centering\def\@captype{table}}%
   {\egroup\vskip\intextsep}
\makeatother

% All the tables are \label'ed in case this document ever gets some
% explanatory text written, however there are no \refs as yet. To save
% LaTeX-ing the file twice we go:
\renewcommand{\label}[1]{}

%%end-prologue%%
\begin{table}
\begin{tabular}{*8l}
\X\alpha        &\X\theta       &\X o           &\X\tau         \\
\X\beta         &\X\vartheta    &\X\pi          &\X\upsilon     \\
\X\gamma        &\X\iota        &\X\varpi       &\X\phi         \\
\X\delta        &\X\kappa       &\X\rho         &\X\varphi      \\
\X\epsilon      &\X\lambda      &\X\varrho      &\X\chi         \\
\X\varepsilon   &\X\mu          &\X\sigma       &\X\psi         \\
\X\zeta         &\X\nu          &\X\varsigma    &\X\omega       \\
\X\eta          &\X\xi                                          \\
                                                                \\
\X\Gamma        &\X\Lambda      &\X\Sigma       &\X\Psi         \\
\X\Delta        &\X\Xi          &\X\Upsilon     &\X\Omega       \\
\X\Theta        &\X\Pi          &\X\Phi

\end{tabular}
\caption{Greek Letters}\label{greek}
\end{table}



\begin{table}
\begin{tabular}{*8l}
\X\pm           &\X\cap         &\X\diamond             &\X\oplus     \\
\X\mp           &\X\cup         &\X\bigtriangleup       &\X\ominus    \\
\X\times        &\X\uplus       &\X\bigtriangledown     &\X\otimes    \\
\X\div          &\X\sqcap       &\X\triangleleft        &\X\oslash    \\
\X\ast          &\X\sqcup       &\X\triangleright       &\X\odot      \\
\X\star         &\X\vee         &             &\X\bigcirc   \\
\X\circ         &\X\wedge       &              &\X\dagger    \\
\X\bullet       &\X\setminus    &            &\X\ddagger   \\
\X\cdot         &\X\wr          &          &\X\amalg     \\
\X+             &\X-
\end{tabular}

\caption{Binary Operation Symbols}\label{bin}
\end{table}



\begin{table}
\begin{tabular}{*8l}
\X\leq          &\X\geq         &\X\equiv       &\X\models      \\
\X\prec         &\X\succ        &\X\sim         &\X\perp        \\
\X\preceq       &\X\succeq      &\X\simeq       &\X\mid         \\
\X\ll           &\X\gg          &\X\asymp       &\X\parallel    \\
\X\subset       &\X\supset      &\X\approx      &\X\bowtie      \\
\X\subseteq     &\X\supseteq    &\X\cong        &    \\
  & &\X\neq         &\X\smile       \\
\X\sqsubseteq   &\X\sqsupseteq  &\X\doteq       &\X\frown       \\
\X\in           &\X\ni          &\X\propto      &\X=            \\
\X\vdash        &\X\dashv       &\X<            &\X>            \\
\X:
\end{tabular}

\caption{Relation Symbols}\label{rel}
\end{table}

setting the graphics path

  • instead of using

    \includegraphics[width=\textwidth]{folder2_relative/picture.png}%
  • by including in the front matter (i.e. before \begin{document}):

    \DeclareGraphicsExtensions{.png,.pdf}%
    \graphicspath{{../folder1_relative/},{folder2_relative/},{/home/myname/folder_absolute/figures/}}%
  • you may simply use

    \includegraphics[width=.49\textwidth]{picture}%
  • one advantage is that you can then use context-dependent rules, for instance:

    \newif\ifpdf
       \ifx\pdfoutput\undefined \pdffalse
    \else \pdfoutput=1 \pdftrue \fi
    % portability between LaTeX and pdfLaTeX
    \ifpdf
    \usepackage[pdftex]{graphicx}
    \usepackage[pdftex, pdfusetitle ,colorlinks=false, pdfborder={0 0 0}]{hyperref}%
    \DeclareGraphicsExtensions{.png,.pdf}%
    \graphicspath{{figures_pdf/}}%
    \pdfoutput=1 % we are running pdflatex
    \pdfcompresslevel=9     % compression level for text and image;
    \pdftrue
    % we are using the traditional latex
    \else
    \usepackage{graphicx}%
    \usepackage[colorlinks=false]{hyperref}%
    \DeclareGraphicsExtensions{.eps}%
    \graphicspath{{figures_eps/}}%
    \fi

some LaTeX tips: drafts, links, margins, pdflatex

More \LaTeX tips...

checking typographic style

managing margins

  • to adjust margins, use

    \usepackage[margin=2.5cm]{geometry}

    then play around with the 2.5cm value until it fits.

  • tips for fitting your text in the required size : LaTeX Tips n Tricks for Conference Paper

citations

  • If you give LaTeX \cite{fred,joe,harry,min}, its default commands could give something like "[2,6,4,3]"; this looks awful. One can of course get things in order by rearranging the keys in the \cite command, but who wants to do that sort of thing for no more improvement than "[2,3,4,6]"?

    • The cite package sorts the numbers and detects consecutive sequences, so creating "[2-4,6]". The natbib package, with the numbers and sort&compress options, will do the same when working with its own numeric bibliography styles (plainnat.bst and unsrtnat.bst).

    • If you need to make hyperreferences to your citations, cite isn't adequate, but you can add the hypernat package:

      \usepackage[...]{hyperref}
      \usepackage[numbers,sort&compress]{natbib}
      \usepackage{hypernat}
      ...
      \bibliographystyle{plainnat}

      See for example http://www.tex.ac.uk/cgi-bin/texfaq2html?label=citesort

Useful draft tips

  • “LaTeX and Subversion”

    • set a keyword with ``svn propset svn:keywords "Id" index.tex`` so that every occurrence of ``$Id$`` in the file gets expanded at commit time

    • use latex-svninfo http://www.ctan.org/tex-archive/macros/latex/contrib/svninfo/ for instance with

      \usepackage[fancyhdr,today,draft]{svninfo}%
      %\usepackage[fancyhdr]{svninfo}%
      \pagestyle{fancyplain}
      \fancyhead{}
    • now at every commit $Id$ will be replaced by useful data that will show up in the foot of the page

    • for a reference on using keywords see http://wiki.loria.fr/wiki/Variables_automatiques

using pdfLaTeX

  • but... sites like arXiv use only plain LaTeX, so you should keep both versions of your directives for better portability (see \ifpdf ...)

    • in particular, arXiv rejects the microtype package

  • the hyperref package even makes it possible to create links to the different chapters.

  • pdfLaTeX cannot include EPS figures; you first have to convert them to PDF with epstopdf, or with the following script, which converts all the .eps files of a folder (save it and make it executable):

    #!/bin/sh
    # convert every EPS file given on the command line to PDF
    for f in "$@"; do
        if echo "$f" | grep -qi '\.eps$'; then
            echo "converting $f to pdf ..."
            epstopdf --nocompress "$f"
        else
            echo "$f is not an eps file, ignored"
        fi
    done

    then simply run in a terminal `` ./mon_script la-ou-ya-tout-mes-eps/*.eps ``

Installation TeX

Some useful bits of \LaTeX code accumulated over the years...

count number of words

  • To count the number of words and characters of a LaTeX document, simply install detex and run the one-line command

    detex MonFichier.tex | wc -w
  • alternatively, you may use

    pdftotext MonFichier.pdf - | wc -w
  • in TexShop there's a "Statistics..." interface to the same technique.

  • on MacOsX, to install appropriate tools, use MacPorts and

    sudo port install detex
    sudo port install xpdf +a4 +with_poppler

including source code in a document with pretty printing

  • use

    \usepackage{attachfile}

    or

    \usepackage{filecontents}
  • see documentation:

    texdoc attachfile
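  • as a minimal sketch (assuming a file script.py sits next to the .tex source; the file name and the description option value are only illustrative), attachfile embeds the source file in the resulting PDF:

    \documentclass{article}
    \usepackage{attachfile}
    \begin{document}
    The source code is attached to this PDF:
    \attachfile[description={analysis script}]{script.py}
    \end{document}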

referring to table or image

  • referring to table or image (and not to the bottom of it)

    \usepackage{hypcap}
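  • a minimal preamble sketch (my assumption: hypcap should be loaded after hyperref, and its all option applies it to every float type):

    \usepackage{hyperref}
    \usepackage[all]{hypcap}% links now point to the float itself, not to its caption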

framed box

  • make a framed box around text (and configure space) :

    \setlength\fboxsep{1pt}
    \setlength\fboxrule{0.5pt}
    \fbox{text}

convert a collection of JPGs to a pdf

\listfiles
\documentclass{minimal}
\usepackage{graphicx}
\usepackage[active,graphics,tightpage]{preview}
\begin{document}
\includegraphics{pic1}
\includegraphics{pic2}
\includegraphics{pic3}
\end{document}
  • or

for f in *.jpg ; do convert $f `basename $f .jpg`.pdf ; done
  • or use the following `` slideshow.tex `` TeX file

    \pdfcatalog{/PageMode/FullScreen}\pdfcompresslevel=0
    \pdfhorigin0pt\pdfvorigin0pt
    \def\process#1 {\setbox0\hbox{\pdfximage width 20cm {#1}%
      \pdfrefximage\pdflastximage}%
      \pdfpagewidth=\wd0 \pdfpageheight=\ht0 \shipout\box0\par}
    \everypar{\setbox0\lastbox\process} \input dir \end

    Usage:

    ls *.jpg > dir
    pdftex slideshow

more fonts

  • on the mac, out of the box with i-installer

    The gwTeX part of this distribution contains all the setup files you need to use a couple of fonts from your Mac. The setup has been created by Thomas A. Schmitz (he did the main work) and Adam Lindsay, hence the naming: gtamacfonts.
    To use these fonts with LaTeX, put e.g. the following in your file:
            \usepackage[T1]{fontenc}
            \usepackage{gtamachoefler}
    Such a style file will make Hoefler Text the serif (roman) text font and Gill Sans the sans serif font. The following basic styles are available:
            gtamacbaskerville.sty
            gtamacdidot.sty
            gtamacgeorgia.sty
            gtamachoefler.sty
    There are  more. See the manual for details. For the same effect using ConTeXt, enter e.g.:
            \usetypescriptfile[type-gtamacfonts]
            \usetypescript[Hoefler][ec]
            \setupbodyfont[Hoefler,12pt]
    Example documents and a manual can be found in the texmf.gwtex/doc/fonts/gtamacfonts subdirectory. To get the manual you can type "texdoc gtamacfonts" in a Terminal window.
  • Latin Modern

    \usepackage[T1]{fontenc}
    \usepackage{lmodern}

Installation TeX

Tex on MacOsX

  • TeXLive is the most recent / easiest distribution. You may add new packages easily in $HOME/Library/texmf (see a reference) or using the TeXLive tool tlmgr

  • to install :

    wget http://ftp.klid.dk/ftp/texlive/tlnet/mactex-2009-sept-20.mpkg.zip
    unzip mactex-2009-sept-20.mpkg.zip
    sudo installer -pkg MacTeX-2009.mpkg -target /

    (check the correct name beforehand at http://ftp.klid.dk/ftp/texlive/tlnet/ )

  • I had to set up a new source repository :

    sudo tlmgr option location http://ftp.klid.dk/ftp/texlive/tlnet
  • to upgrade

    sudo tlmgr update --self
    sudo tlmgr update --all

duplicate files

You may find yourself overwhelmed by files and in need of keeping your filesystem organized. If deleting duplicates is the best option, you may consider the following:

Dupinator

Dupinator tries to find duplicate files and report them, in order to clean up the organization of your files.


It works by:

  • it is launched from the command line with a set of directories to be scanned

  • it traverses all directories and groups all files by size

  • it scans each set of same-size files and checksums (md5) their first 1024 bytes

  • for all files that share the same checksum on the first 1024 bytes, it checksums the whole file and collects together the real duplicates

  • it deletes all duplicates of any one file, leaving the first encountered file as the one remaining copy
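A rough Python sketch of the same idea (an illustration only, not the actual Dupinator code; it only reports the duplicate groups instead of deleting them):

    import hashlib
    import os
    import sys
    from collections import defaultdict

    def digest(path, limit=None):
        """Return the md5 of the first `limit` bytes (whole file if limit is None)."""
        h = hashlib.md5()
        with open(path, 'rb') as f:
            h.update(f.read(limit) if limit else f.read())
        return h.hexdigest()

    # 1. traverse the directories given on the command line and group files by size
    by_size = defaultdict(list)
    for root in sys.argv[1:]:
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                by_size[os.path.getsize(path)].append(path)

    # 2. among same-size files, group by the checksum of the first 1024 bytes;
    # 3. among those, group by the checksum of the whole file: real duplicates
    for paths in by_size.values():
        if len(paths) < 2:
            continue
        by_head = defaultdict(list)
        for path in paths:
            by_head[digest(path, 1024)].append(path)
        for candidates in by_head.values():
            if len(candidates) < 2:
                continue
            by_full = defaultdict(list)
            for path in candidates:
                by_full[digest(path)].append(path)
            for duplicates in by_full.values():
                if len(duplicates) > 1:
                    print('duplicates:', duplicates)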

Impact Factor

Most researchers nowadays are judged based on their publication list and ---as a shortcut--- quantitatively rated by their cumulative Impact Factor. How reliable is this method?

  • This paper studies the assumptions underlying the journals' impact factor and the open access initiative:

    • M. Taylor, P. Perakakis, and V. Trachana. The siege of science. Ethics in Science and Environmental Politics, 8(1):17--40, 2008.

keyboard

Special (unix) characters on a French Mac keyboard

Thanks to http://busy.lab.free.fr/mac/ !

backslash                 \    shift + option + /
pipe (or)                 |    shift + option + L
tilde                     ~    option + N
simple quote              '    the 4 key (that is, a regular ' on any keyboard, I guess)
opening brace             {    option + (
closing brace             }    option + )
opening square bracket    [    shift + option + (
closing square bracket    ]    shift + option + )

Apple also has the habit of giving its own special names to some keys:

  • ctrl is called, well, control. So far, so good.

  • the alt key (with a funny sign under it) is called option.

  • the key with a funny symbol and the Apple logo is called command (because we Apple users are supposed to be so creative, I think), or sometimes, more logically, apple.

keyboard shortcuts

  • in Finder (and most other applications)

    • ⌃⌘Z to zoom the window

    • ⌘ + ` to switch between your windows

    • ⇧⌘[ and ⇧⌘] to switch tabs

    • ⌃⌘D to show the dictionary

  • in Firefox:

    • Fn + Alt + Left arrow gets you to the home page.

    • Command + L : link bar

    • Command + K: search box

    • Ctrl + Tab: Next Tab

  • see Mac OS X keyboard shortcuts

Deliverable M9-3: Workshop for definition of a detailed version of the V1 hypercolumn model

The INCM held a workshop on the V1 hypercolumn model on the 22nd and 23rd of October 2007. The purpose of the workshop was to promote the coordination of modeling efforts within the consortium and in particular to organize collaboration in WP9T2 and associated work-packages. It is labeled as deliverable M9-3, "Workshop for definition of a detailed version of the V1 hyper-column model", for WP9T2 (work package 9, task 2), but is also linked by its subject to WP5. In contrast to the previous meeting, we started with brief presentations of the results from each group so as to expose the most effective aspects of each model. This allowed us, in the second half of the workshop, to converge on the main issues and prioritize the neural features that matter most for the efficiency of V1. As for the format, we proposed that PhD students and post-docs should have the opportunity to present this work, both to give them experience and to reduce the burden on busy group leaders. We felt it was important to involve as many people as possible in this workshop as we approach a crucial stage in integrating the different models.

Program

Afternoon of Monday, 22 October 07: Framework

12:30-14:00

Registration and Lunch

14:00-18:00 (First session)

Introduction: coordination of modeling efforts for WP9T2

14:00-14:10

Guillaume Masson (INCM): Welcome

14:10-14:30

  • Laurent presented some examples of using the benchmark structure proposed in NeuroTools and the new SpikeList object. The Benchmark class will be the privileged way of spreading the different benchmarks to all partners and is a practical way of describing a benchmark as a list of experiments, storing the results of the experiments, distributing and running the experiments on a cluster, and finally plotting figures. In a second part, Laurent presented the SpikeList object as an answer to the difficulty of finding a common format for describing lists of spikes. Some experimental examples and an interface with PyNN were shown as a proof of concept. Jan suggested that the internal representation could be sparse; this should be easy to do using the format methods (see SpikeList).

14:40-15:10

  • Andrew and Jens presented a system's approach to modeling the visual system. Partners in FACETS need to exchange knowledge on their modeling progress, and this approach made it possible to construct a model from composite parts (retina, LGN, V1), the “LEGO bricks” of the different groups. A practical example with the INRIA/INCM retina and an INCM V1 layer-4 model was shown. An LGN brick is to be developed. Bricks for higher areas (MT) can also be developed.

15:20-16:00

Coffee

16:00-16:20

  • Andrew proposed a format specification for benchmarks: 1) the input is provided as zipped PNG files, and storage is done in the FACETS knowledge base (every stimulus has a unique URL); 2) a specification scheme for writing benchmarks and a whole experiment was presented, with a proposed format using an XML interface.

16:30-16:50

  • Adrien presented his physiologically realistic retina, emphasizing contrast gain control. His model uses the INRIA simulator, but with an open architecture and a specification compatible with the FACETS specification. The retina uses feedback on the bipolar cells to achieve realistic contrast gain control, which was compared with classical experiments (Shapley and Victor, 79; Enroth-Cugell, XX). A possible extension is to study the importance of the spike profile at image onset, which reveals first the luminance image and then the edges (Enroth-Cugell 83, Benardete-Kaplan 99). He also presented a mean-field approach valid for gratings.

17:00-17:20

  • Klaus presented the retina used at the TUG. It is based on a spatio-temporal decorrelation model from Dong and Atick (1995) for the linear part and on a gamma renewal process (Gazères, 1998) for the spiking part. It is available to FACETS partners in the SVN1 and will be converted to the format specified in the VisualSytem class (see https://www.kip.uni-heidelberg.de/repos/FACETSCOMMON/facetsmodel/LGN/trunk/graz ).

17:30-18:00

General Discussion

  • conclusions on benchmarking progress and decisions about their future development and use.

  • We concluded on benchmarking progress and decided about their future development and use. In particular, the unification proposed by the Benchmark class and the SpikeList object from Laurent and the specification proposed by Andrew seemed to fit the needs of the partners, and we agreed to use the proposed schemes.

  • We agreed on further definition of the visual benchmarks WP9T2-VisionBenchmarks (while respecting the standards from FACETS_Benchmarks). This will be implemented in a coming deliverable.

  • The question of whether we can reach the goal of a single, common framework for multiple V1 models in FACETS was left open, due to the wide variety of approaches in the consortium. However, the discussion suggested some collaborations between groups on specific scientific questions.

Morning of Tuesday, 23 October 07

09:00-13:00 (Second session)

Progress in Modeling V1

09:00-09:20

  • Mike described current progress towards a neural model of motion perception in V1/MT, a collaborative project with CNRS/INCM. The model is based on the recurrent neural circuitry between a V1 hypercolumn and area MT, and uses a Kalman-Bucy approach to estimate the velocity of the moving object by integrating local motion information. The model will be suitable for testing using the FACETS visual motion benchmark stimuli, in particular the CNRS/INCM data on motion integration for smooth eye pursuit.

09:30-09:50

  • The KTH model was presented by Jan, with the latest changes and additions and some preliminary results. It uses HH neuron models and a hierarchical columnar structure. The horizontal connectivity is specified by the LISSOM model, while the vertical connectivity is inspired by biology.

10:00-10:20

  • Klaus presented the TUG model with the latest changes and some preliminary results. He illustrated the use of Izhikevich neurons and of the data from Alex Thomson (rather than from Binzegger) by showing some results of the columnar model, in particular by studying the influence of NMDA.

10:30-10:50

  • Jens presented the progress of the INCM/ALUF cooperation, emphasizing progress in the specification of the thalamo-cortical projections. He presented results of applying the different benchmarks proposed to reveal the functional properties of the model. It emerged that the inhibition scheme used allowed a normalization of the input at the network level. This was put into evidence using the spike-triggered conductance profiles, which were consistent with some recent data from Rudolph et al. (2007).

11:00-11:30

Coffee

11:30-13:00

  • Round-table discussion: questions, problems, and issues of modeling V1. Moderator: G. Masson

  • Retina model: shouldn't we use a more standardized input? How should we set up background noise? Biologists need to specify what they mean by the background (ongoing) activity to be simulated in large-scale neural networks.

  • Back-to-back cooperation between experimentalists and modelers for defining benchmarks, in connection with the dissemination of experimental data.

  • Link with higher-level tasks, such as the one presented by Mike in WP9T3 or the work done in the INCM-INRIA collaboration. Since they use the same benchmarks (such as motion integration), these approaches can better define the computational rules to be implemented in large-scale neural networks. One good example is the role of asymmetric diffusion of information/activity in the network, thanks to feedback from higher areas.

13:00-14:00

Lunch

Afternoon of Tuesday, 23 October 07

14:00-16:30 (Third session)

Scientific questions coming from the Biology of V1

14:00-14:20

  • Julian presented a review of different results on the physiology and anatomy of the cat retino-thalamo-cortical projections. Although the architecture of these projections was much debated during the workshop, little is known with general agreement. Correlating different works from the literature, a detailed quantification was reviewed, suggesting a disagreement between anatomy and physiology. Presenting the work of Ringach (2004), he concluded by showing his own simulations suggesting that the properties and diversity of the receptive fields of cat area 17 simple cells may be captured by a wiring scheme based on the specific quantization of the parameters of the retino-thalamo-cortical pathway.

14:30-14:50

  • Cyril reviewed different models of the emergence of orientation and direction selectivity before emphasizing the results of different groups on the role of conductance profiles in this function. This revealed a diversity of behaviors, from push-pull models where inhibitory and excitatory profiles overlap to other configurations. This was compared with results obtained at the UNIC.

15:00-15:20

  • Alex presented preliminary results on center-surround interactions using VSD optical imaging in primate V1. At the retinotopic position of the center, the response to the center appears with decreasing latency for increasing contrast. The response to the 80% contrast surround reaches the center at a latency equivalent to approximately 15% contrast, leaving open the question of how these two information streams interact. Preliminary results show suppression for high contrasts but facilitation for low center contrast. Further analysis (of latency and propagation) suggests a functional role for horizontal propagation in this configuration.

15:30-16:00

Coffee

16:00-16:30

General Discussion Moderator: Yves Fregnac (UNIC)

16:30-17:30

Outcome: Planning of Implementation plans / priorities for WP9T2.

  • Several actions need to be taken. 1) We keep the idea of one annual meeting on V1 modeling. The meeting shall be held in June instead of October, to prepare for the annual reports and the implementation plan. We will post a call for the organization of the 2008 meeting. We should also try to bring more biologists to these meetings. 2) We will organize a phone/video conference once every 3 months to exchange information and compare outputs for each benchmark step. 3) We shall provide a timeline for delivering benchmark tools and objectives, as well as deadlines for collecting results for testing models. Such a timeline will be added to D9-2, in which we will describe the different benchmarks. 4) We will set up a discussion list on the FACETS Wiki website to propose new questions from modelers to biologists and vice-versa. The idea is to post information for which there is a general agreement rather than to run an on-going forum. Answers shall be concise, with references to published work and/or available data. 5) We shall promote active collaboration between sites, with the objective of joint publications on specific aspects of the visual tasks to be developed in FACETS (on-going activity, local cortical point spread function, ...).

Questions

Below are some questions modelers (please add to this list) would like the biologists at the meeting to answer during the round-table discussion (11:30-13:30 on 23 October):

  • What is the purpose of the Y-type pathway input to layer 4 of cat area 17?

  • Is the tuning of cortical neurons dynamic or not (e.g. for orientation)?

  • Can simple or complex cortical cells be directionally selective but untuned for orientation?

  • Are inhibitory neurons in cortex generally tuned for orientation or not?

  • Do inhibitory fast-spiking (FS) neurons have a higher spiking threshold than excitatory regular-spiking (RS) neurons?

  • Is modelling corticothalamic feedback essential for models of V1?

  • What common mistakes do modellers make that annoy you and fellow biologists the most?

  • How can modellers best help the biologists?

For the sake of fairness, we would also like some questions from the biologists for modellers to answer during the same discussion session.

Organization

When and where?

The dates for the workshop are Monday, 22 and Tuesday, 23 October 07. It will take place at the INCM in Marseille, as for the last meeting (see Marseille_November2006 ).

Who is attending

Please register your attendance at https://facets.kip.uni-heidelberg.de/internal/jss/AttendMeeting?meetingID=28 .

  • monday lunch 16 people (everybody except Anders Lansner)

  • monday dinner 15 people (everybody except Anders Lansner and Andrew Davison)

  • tuesday lunch 16 people (everybody except Andrew Davison)

facilities

we will have

  • a beamer

  • no internet connection (ask if you need one)

  • lunch and coffee breaks!

  • (the video conferencing system is not needed anymore)

more info

Post-doctoral Position in Computational Neuroscience: "Functional, Large-scale Models of Visual Motion Perception"

Warning

The position has been filled

We are currently inviting applications for a postdoctoral position in computational neuroscience to study functional, large-scale models of visual motion perception. The post is for up to 3 years in the DyVA team at the INCM (CNRS) in Marseille, France, and will be funded within the European FACETS consortium.

The project will involve developing, designing and implementing large-scale neural networks of the primary visual areas. The goal is to bridge theoretical principles of computation, physiological data (e.g. optical imaging) and behavioral data (e.g. eye movements) to bring new understanding of the neural computations underlying the perception of motion. We are particularly interested in the dynamics of motion integration, focusing on responses to complex motion stimuli in areas V1 and V5/MT. Our theoretical approaches span from statistical inference to adaptive distributed representations, to provide insights into these parallel, dynamical processes. We use state-of-the-art simulation software to validate these results thanks to our computer facilities, with the goal of transferring this technology to aVLSI chips developed within the consortium.

The DyVA team is a young pluridisciplinary CNRS research team (Head: Dr. Guillaume Masson) integrating research on eye movements and visual perception at both behavioral (psychophysics, motor control) and physiological (optical imaging, electrophysiology) levels. The computational project will be conducted by Dr. Laurent Perrinet in close collaboration with other FACETS teams (Pr. O. Faugeras, INRIA, Sophia-Antipolis; Pr. Ad Aertsen, Freiburg; Pr. W. Maass, Graz; Pr. Y. Fregnac, CNRS, Paris). The work conducted in Marseille will involve a PhD student and a computer engineer together with the post-doctoral fellow. Close interactions within the team with the biological tasks conducted (optical imaging in the awake monkey) are promoted.

The postdoc will work as part of a Europe-wide research team based on a new EU-funded Integrated Project entitled "FACETS: Fast Analog Computing with Transient States". The FACETS project is a major 11-million-euro, four-year research project funded by the European Union. The stated objective of FACETS is to explore and exploit the computational principles that constitute the basis of information processing in the brain. The project involves all facets of neuroscience, from experimental neuroscience ("in vivo") and the construction of models and analytical descriptions of neural cells and networks ("in computo") to the construction of very large-scale neural circuits in VLSI technology ("in silico"). The FACETS consortium includes fifteen of the major laboratories in Europe in these areas. One of the goals of the FACETS consortium is to develop a large-scale model of the primary visual cortex and explore its computational capabilities for solving low-level motion tasks involving short- and long-range lateral interactions.

Expertise in computational modeling as well as mathematical and programming skills are required. A keen interest in neuroscience is necessary to ensure the tight coupling of the computational approach with experiments done in the lab. The position is available immediately; the appointment will be for a fixed term of up to three years, ending 31st August 2009.

Marseille is a major French city on the Mediterranean Sea with a lively atmosphere and cultural life. The DyVA team is located on a CNRS campus with excellent research facilities. Net salary is about 1800€/month, full social coverage included.

Interested candidates should send their application materials (cover letter, CV, statement of research interests and research experience; no .doc file, please) or informal enquiries to Dr. Laurent Perrinet ( mailto:Laurent.Perrinet@incm.cnrs-mrs.fr ).

This work was supported by European integrated project FP6-015879, "FACETS".

V1 hypercolumn Coordination Meeting, 20th - 21st Nov 2006

Subject: Coordination meeting of the WP9T2 and WP5T3 tasks.

  • The goal of the meeting was to prepare the next deliverable D25 ("model of a hyper-column") but also to join our efforts in modeling. In particular, important decisions were made toward finding canonical parameters (structure, neural parameters) for all systems being delivered in WP5T3 and WP9T2, but also concerning the definition of the common benchmark that will be deployed to the different implementations of the different partners. From this benchmark (benchmark zero), we should be able to validate the different solutions and pinpoint their strengths and weaknesses.

Main points and decisions

  • review / demonstration of coding strategies by different partners. This definition has an impact on the "Meta Simulator",

  • review of existing V1 models inside and outside FACETS (see also Review_of_V1_models),

  • definition of the common benchmark FACETS_Benchmarks to qualitatively compare different models and strategies in order to break the model complexity barrier, as we began to do for the simulator,

  • definition of the architecture of the FACETS model of a "hyper-column" by defining the priorities of the different partners: the effect of including different neuron types, of including layers, and of specific lateral connectivity.

  • Partner 13 (Plymouth, Mike Denham) will provide a hypercolumn for D25.

  • Partner 6b will provide a retina before Dec. 15 for implementing benchmark zero.

Program

(actual program changed slightly)

Monday, 20 November 2006

morning, 09:00 - 13:00

Panel discussion : Reviewing essential knowledge from biology

* Christophe Lamy - Alex Thomson (ULON): Diversity in the architecture (layering, cell types and dynamics) of cortical columns
* Fred Chavane (INCM): Essentials of the functional architecture and cortical lateral connectivity
* Cyril Monier (UNIC): Models of V1 and orientation selectivity (also see ...)
Round-table: what shall we keep for modeling/hardware? what are the priorities? Moderator: A. Lansner

13:00

Lunch

afternoon

Visual benchmarking for FACETS V1 models

* Laurent Perrinet (INCM): Short description of visual function benchmarks for FACETS
* Jens Kremkow (INCM): Contrast gain control with a layered neural network
* Guillaume Masson (INCM): Using IO data for tuning models?
* Adrien Wohrer (INRIA): (CANCELED) Statistics of inputs for benchmarking V1 models
Round-table: Implementation plans for benchmarking. Moderator: O. Faugeras

Tuesday, 21 November 2006

morning, 09:00 - 13:00

Overview of V1 models from FACETS: diversity and common strengths

* Mike Denham (UP): The English model
* Anders Lansner (KTH): The Swedish model
* Malte Rasch / Wolfgang Maass (TUG): The Austrian model
* Outcome: Definition of a FACETS common model of V1. Moderator: L. Perrinet

13:00

Lunch

afternoon

Planning of implementation plans
* benchmarking the existing models
* implementing the FACETS model for D25
* link between WP9 and WP5 on the track to the "detailed model" of V1 computing.
Outcome: to set deadlines and implementation plans. Moderator: G. Masson

venue

who, when

  • complementary to the FACETS web site, here is the timetable for the participants (an opportunity to share a taxi, perhaps?):

  • Olivier Faugeras will attend on Monday (no hotel)

  • Andrew Davison will arrive on Tuesday, leaves Wednesday afternoon (hotel on Tuesday)

  • Cyril Monier (hotel from Sunday to Wednesday)

  • Olivier Marre and Pierre Yger will arrive on Sunday, leave Wednesday morning (hotel from Sunday to Tuesday)

  • Anders Lansner and Martin Rehn arrive Sunday evening and leave Tuesday (flight around 5 pm) (hotel from Sunday to Monday)

  • Malte Rasch will arrive on Sunday 19.11 (hotel from Sunday to Monday)

  • Christophe Lamy will arrive on Sunday @ 11.30pm, leaves Monday (hotel on Sunday)

  • Michael Denham arrives Monday at 2pm (St Charles), leaves Tuesday at 18:09 (St Charles) (hotel on Monday)

craac

CRAAC 2005

2005-10-24

Summary of your entry: this document will be submitted for your director's approval

Annual activity report of CNRS researchers, year 2004 - 2005

Identity

Last name (maiden name): PERRINET
First name: Laurent
Date of birth: 23/02/1973
Grade: CR2
Agent number: QAF195447
Telephone: 04 91 16 43 64
Fax: 04 91 16 43 64
E-mail: laurent.perrinet@incm.cnrs-mrs.fr
National Committee section(s): 7
Scientific department: Life sciences
Regional delegation: Provence

Assignment

Unit name: Institut de neurosciences cognitives de la méditerranée - INCM
Unit code: UMR6193
Director: Driss BOUSSAOUD
Director's e-mail: driss.boussaoud@incm.cnrs-mrs.fr
Address: 31 Chemin Joseph Aiguier

  • 13402 MARSEILLE CEDEX 20

Telephone: 04 91 16 43 18
Fax: 04 91 77 49 69
Delegation: Provence
Web site:
Distinction(s):

Qualification

Habilitation à diriger des recherches: no
Doctorat d'État: no
Doctorate: yes, obtained in 2003
"Maître de conférences" qualification: no
"Professeur" qualification: no
Period of inactivity:

Previous mobility

Research activities carried out

Attachment to the research activities of unit UMR6193

Activity title / End date of attachment

  • DYNAMIQUE DE LA PERCEPTION VISUELLE ET DE L'ACTION (DyVA)

Keywords of the National Committee sections/CIDs

Section 1: Partial differential equations, optimization, control, signal theory
Section 1: Probability, stochastic processes and algorithms, statistics, data analysis
Section 7: Artificial intelligence: reasoning, decision, learning
Section 7: Modeling, analysis, control and supervision of dynamical systems
Section 27: Normal and pathological behavioral and cognitive neuroscience, human and animal models, functional brain imaging
Section 27: Modeling of cognitive processes and computational neuroscience

Publications, published or in press, in peer-reviewed journals

FISCHER, S.; REDONDO, R.; PERRINET, L.; CRISTOBAL, G. (2005). "Sparse Gabor wavelets by local operations." Proceedings SPIE, volume 5839 of Bioengineered and Bioinspired Systems, 75-86.
PERRINET, L. "Feature detection using spikes: the greedy approach." Journal of Physiology, Paris (special issue), in press.
PERRINET, L.; SAMUELIDES, S.; THORPE, S. "Coding static natural images using spiking event times: do neurons cooperate?" IEEE Transactions on Neural Networks, 15, 1164-1175.
PERRINET, L. "Finding independent components using spikes: a natural result of Hebbian learning in a spike coding scheme." Natural Computing, 3, 159-175.
PERRINET, L. "Emergence of filters from natural scenes in a sparse spike coding scheme." Neurocomputing, 58-60, 821-826.
PERRINET, L. (2005). "Efficient source detection using integrate-and-fire neurons." In W. Duch et al., editor, ICANN 2005, Lecture Notes in Computer Science, volume 3696, pages 167-72.
REDONDO, R.; FISCHER, S.; PERRINET, L.; CRISTOBAL, G. (2005). "Simple cells modeling through a sparse overcomplete Gabor wavelet representation based on local inhibition and facilitation." In Gustavo Linan-Cembrano and Ricardo A. Carmona, editors, Perception, volume 34 of Bioengineered and Bioinspired Systems II, page 238, August 2005.

Publications, published or in press, in journals without peer review

Books or book chapters, published or in press

Participation in scientific events

Event: Dynn - ACI Temps et Cerveau. Type of event: national. Place: Cannes (FRANCE). Duration: 3 day(s)

Event: ECVP. Type of event: international. Place: A Coruña (SPAIN). Duration: 5 day(s)

Event: ICANN. Type of event: international. Place: Warsaw (POLAND). Duration: 5 day(s)

Event: Maths et Cerveau. Type of event: international. Place: Paris (FRANCE). Duration: 15 day(s)

Event: Pre-FACETS Workshop on Simulation and Computation. Type of event: international. Place: Graz (AUSTRIA). Duration: 4 day(s)

Editorial activity

Reviewer/referee for journals. Additional information: Advanced Concepts for Intelligent Vision Systems (Sept 20-23, 2005, University of Antwerp, Antwerp, Belgium, http://acivs.org/acivs2005/)

Reviewer/referee for journals. Additional information: Neural Information Processing

Reviewer/referee for journals. Additional information: Neural Processing Letters

Reviewer/referee for journals. Additional information: International Journal of Neural Systems (IJNS)

Stays in other laboratories

Purpose: visit and collaboration with Bruno Olshausen. Organization: Redwood Neuroscience Institute. Country: UNITED STATES. Unit: RNI. Annual duration: 30 day(s)

Purpose: collaboration with Sylvain Fischer and Gabriel Cristobal (joint publications). Organization: CSIC. Country: SPAIN. Unit: Instituto de Optica. Annual duration: 30 day(s)

Field missions

Personal training

Collaborations

Partner organization: CNRS. Country: FRANCE (Europe). Partner unit: UMR Mouvement et Perception. Title: ANR RETINAE. Cooperation framework: (not specified). Nature of the activity: (not specified)

Partner organization: INRIA. Country: FRANCE (Europe). Partner unit: Odyssee. Title: FACETS. Cooperation framework: OTHER - European contract. Nature of the activity: participation in a network

Partner organization: SUPAERO - CERT. Country: FRANCE (Europe). Partner unit: ONERA. Title: Réseau Dynn - ACI Temps et Cerveau. Cooperation framework: (not specified). Nature of the activity: participation in a network

Supervision and scientific coordination

Scientific coordination: participation in and organization of the internal training organized by the INCM. Teaching:

Valorization and partnerships

Outreach

Type of information: school outreach. Title: Fête de la Science. Type of participation: occasional participation.

Research administration