Publications 2006-2010
Articles
References of full articles
Laurent Perrinet. Dynamical Neural Networks: modeling low-level vision at short latencies, URL. In Topics in Dynamical Neural Networks: From Large Scale Neural Networks to Motor Control and Vision, pages 163--225. Springer Berlin / Heidelberg, 2007.
Frédéric Barthélemy, Laurent Perrinet, Éric Castet, Guillaume S. Masson. Dynamics of distributed 1D and 2D motion representations for short-latency ocular following, URL URL2 URL3. Vision Research, 48(4):501--22, 2008.
-
Anna Montagnini, Pascal Mamassian, Laurent U. Perrinet, Eric Castet, Guillaume S. Masson. Bayesian modeling of dynamic motion integration, URL. Journal of Physiology (Paris), 101(1--3):64--77, 2007.
The quality of the representation of an object's motion is limited by the noise in the sensory input as well as by an intrinsic ambiguity due to the spatial limitation of the visual motion analyzers (aperture problem). Perceptual and oculomotor data demonstrate that motion processing of extended objects is initially dominated by the local 1D motion cues orthogonal to the object's edges, whereas 2D information progressively takes over and leads to the final correct representation of global motion. A Bayesian framework accounting for the sensory noise and general expectancies for object velocities has proven successful in explaining several experimental findings concerning early motion processing [1, 2, 3]. However, a complete functional model encompassing the dynamical evolution of object motion perception is still lacking. Here we outline several experimental observations concerning human smooth pursuit of moving objects and more particularly the time course of its initiation phase. In addition, we propose a recursive extension of the Bayesian model, motivated and constrained by our oculomotor data, to describe the dynamical integration of 1D and 2D motion information.
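As a reading aid, the core of such a Bayesian model can be written compactly; the notation below is illustrative (after Weiss et al., 2002), and the recursive step paraphrases the abstract rather than reproducing the paper's exact equations:

```latex
% Bayesian velocity estimate with a low-speed prior, plus a recursive
% update in which the posterior at time t serves as the prior at t+1.
\begin{align}
  p(\mathbf{v} \mid I_t) &\propto p(I_t \mid \mathbf{v})\, p_t(\mathbf{v}),
  \qquad
  p_0(\mathbf{v}) \propto \exp\!\left(-\tfrac{\|\mathbf{v}\|^2}{2\sigma_p^2}\right) \\
  p_{t+1}(\mathbf{v}) &\propto p(I_t \mid \mathbf{v})\, p_t(\mathbf{v})
\end{align}
```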
-
Laurent Perrinet, Guillaume S. Masson. Modeling spatial integration in the ocular following response using a probabilistic framework, URL. Journal of Physiology (Paris), 2007.
The machinery behind the visual perception of motion and the subsequent sensori-motor transformation, such as in the Ocular Following Response (OFR), is confronted with uncertainties which are efficiently resolved in the primate's visual system. We may understand this response as an ideal observer in a probabilistic framework by using Bayesian theory (Weiss et al., 2002), which we previously proved to be successfully adapted to model the OFR for different levels of noise with full field gratings (Perrinet et al., 2005). More recent experiments of OFR have used disk gratings and bipartite stimuli which are optimized to study the dynamics of center-surround integration. We quantified two main characteristics of the spatial integration of motion: (i) a finite optimal stimulus size for driving OFR, surrounded by an antagonistic modulation, and (ii) a direction selective suppressive effect of the surround on the contrast gain control of the central stimuli (Barthélemy et al., 2006). Herein, we extended the ideal observer model to simulate the spatial integration of the different local motion cues within a probabilistic representation. We present analytical results which show that the hypothesis of independence of local measures can describe the integration of the spatial motion signal. Within this framework, we successfully accounted for the contrast gain control mechanisms observed in the behavioral data for center-surround stimuli. However, another inhibitory mechanism had to be added to account for suppressive effects of the surround.
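The independence hypothesis invoked above has a compact form: if local measurements are conditionally independent given the global velocity, local evidence simply sums in the log domain. The notation is again illustrative:

```latex
% Factorized likelihood over local analyzers x; the log-posterior is a
% sum of local log-likelihoods plus the prior (up to a constant).
\begin{equation}
  p(I \mid \mathbf{v}) = \prod_{x} p(I_x \mid \mathbf{v})
  \;\;\Longrightarrow\;\;
  \log p(\mathbf{v} \mid I) = \sum_{x} \log p(I_x \mid \mathbf{v})
  + \log p(\mathbf{v}) + \mathrm{const.}
\end{equation}
```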
B. Cessac, E. Daucé, Laurent U. Perrinet, M. Samuelides. Topics in Dynamical Neural Networks: From Large Scale Neural Networks to Motor Control and Vision, `URL <https://laurentperrinet.github.io/publication/cessac-07>`__ `URL2 <http://www.springerlink.com/content/q00921n9886h/>`__. Springer Berlin / Heidelberg, 2007.
-
Andrew P Davison, Daniel Brüderle, Jochen Eppler, Jens Kremkow, Eilif Muller, Dejan Pecevski, Laurent Perrinet, Pierre Yger. PyNN: A Common Interface for Neuronal Network Simulators, URL. Frontiers in Neuroinformatics, 2:11, 2008.
Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN.
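To make the "write once, run on any supported simulator" idea concrete, here is a minimal sketch in the PyNN style (0.8-era API; names and defaults vary across versions, and all parameter values are illustrative):

```python
import pyNN.nest as sim   # swap for pyNN.neuron / pyNN.brian: rest unchanged

sim.setup(timestep=0.1)   # ms

# 100 conductance-based integrate-and-fire cells driven by Poisson input
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
cells = sim.Population(100, sim.IF_cond_exp(tau_m=20.0))

sim.Projection(noise, cells, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.005, delay=1.0))

cells.record('spikes')
sim.run(1000.0)           # ms

data = cells.get_data()   # a Neo Block holding the recorded spike trains
sim.end()
```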
Sylvain Fischer, Rafael Redondo, Laurent Perrinet, Gabriel Cristóbal. Sparse approximation of images inspired from the functional architecture of the primary visual areas, URL. EURASIP Journal on Advances in Signal Processing, special issue on Image Perception, Article ID 90727, 16 pages, 2007.
Sylvain Fischer, Filip Sroubek, Laurent U. Perrinet, Rafael Redondo, Gabriel Cristóbal. Self-invertible 2D log-Gabor wavelets, URL. International Journal of Computer Vision, 2007.
-
Nicole Voges, Laurent Perrinet. Phase space analysis of networks based on biologically realistic parameters, URL. Journal of Physiology (Paris), 104(1--2):51--60, 2010.
We study cortical network dynamics for a more realistic network model. It represents, in terms of spatial scale, a large piece of cortex allowing for long-range connections, resulting in a rather sparse connectivity. We use two different types of conductance-based I&F neurons as excitatory and inhibitory units, as well as specific connection probabilities. In order to remain computationally tractable, we reduce neuron density, modelling part of the missing internal input via external Poissonian spike trains. Compared to previous studies, we observe significant changes in the dynamical phase space: altered activity patterns require another regularity measure than the coefficient of variation. We identify two types of mixed states, where different phases coexist in certain regions of the phase space. More notably, our boundary between high and low activity states depends predominantly on the relation between excitatory and inhibitory synaptic strength instead of the input rate. Key words: artificial neural networks, data analysis, simulation, spiking neurons. This work is supported by EC IP project FP6-015879 (FACETS).
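The coefficient of variation mentioned here as the (insufficient) standard regularity measure is easy to state precisely; a minimal NumPy illustration, with no simulator dependency assumed:

```python
# CV of inter-spike intervals: ~1 for a Poisson process, 0 for a clock.
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of the inter-spike intervals."""
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

rng = np.random.default_rng(0)
poisson_train = np.cumsum(rng.exponential(scale=50.0, size=200))  # ms
print(cv_isi(poisson_train))  # close to 1 for Poisson firing
```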
-
Emmanuel Daucé, Laurent Perrinet. Computational Neuroscience, from Multiple Levels to Multi-level, URL. Journal of Physiology (Paris), 104(1--2):1--4, 2010.
Despite the long and fruitful history of neuroscience, a global, multi-level description of cardinal brain functions is still far from reach. Using analytical or numerical approaches, Computational Neuroscience aims at the emergence of such common principles by using concepts from Dynamical Systems and Information Theory. The aim of this Special Issue of the Journal of Physiology (Paris) is to reflect the latest advances in this field, as presented during the NeuroComp08 conference that took place in October 2008 in Marseille (France). By highlighting a selection of works presented at the conference, we wish to illustrate the intrinsic diversity of this field of research, but also the need for a unification effort that is becoming more and more necessary to understand the brain in its full complexity, from multiple levels of description to a multi-level understanding.
-
Laurent U. Perrinet. Role of homeostasis in learning sparse representations, URL. Neural Computation, 22(7):1812--36, 2010.
Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such an efficient coding is achieved using a competition across neurons so as to generate a sparse representation, that is, where a relatively small number of neurons are simultaneously active. Indeed, different models of sparse coding coupled with Hebbian learning and homeostasis have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism which optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, the homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair: By contributing to optimize statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.
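The "fair competition" idea can be summarized in one line; the notation below is an illustrative reading of the abstract (with z_i standing for a running estimate of each coefficient's cumulative distribution), not the article's exact formulation:

```latex
% Gain-equalized competition: select the atom whose coefficient is most
% surprising under its own statistics, so that all atoms end up being
% selected with roughly equal probability in the long run.
\begin{equation}
  i^\ast = \arg\max_i \; z_i\!\left(\langle x, \phi_i \rangle\right),
  \qquad
  z_i(c) = P\!\left(c_i \le c\right)
\end{equation}
```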
References of articles and proceedings
Pierre Yger, Daniel Brüderle, Jochen Eppler, Jens Kremkow, Dejan Pecevski, Laurent Perrinet, Michael Schmuker, Eilif Muller, Andrew P Davison. NeuralEnsemble: Towards a meta-environment for network modeling and data analysis, URL. In Eighth Göttingen Meeting of the German Neuroscience Society, pages T26-4C, 2009. NeuralEnsemble (http://neuralensemble.org) is a multilateral effort to coordinate and organise neuroscience software development efforts based around the Python programming language into a larger, meta-simulator software system. To this end, NeuralEnsemble hosts services for source code management and bug tracking (Subversion/Trac) for a number of open-source neuroscience tools, organizes an annual workshop devoted to collaborative software development in neuroscience, and manages a Google group discussion forum. Here, we present two NeuralEnsemble-hosted projects: PyNN (http://neuralensemble.org/PyNN) is a package for simulator-independent specification of neuronal network models. You can write the code for a model once, using the PyNN API, and then run it without modification on any simulator that PyNN supports. Currently NEURON, NEST, PCSIM and a VLSI hardware implementation are fully supported. NeuroTools (http://neuralensemble.org/NeuroTools) is a set of tools to manage, store and analyse computational neuroscience simulations. It has been designed around PyNN, but can also be used for data from other simulation environments or even electrophysiological measurements. We will illustrate how the use of PyNN and NeuroTools eases the developmental process of models in computational neuroscience, enhancing collaboration between different groups and increasing the confidence in correctness of results. NeuralEnsemble efforts are supported by the European FACETS project (EU-IST-2005-15879).
Adrien Wohrer, Guillaume Masson, Laurent Perrinet, Pierre Kornprobst, Thierry Viéville. Contrast sensitivity adaptation in a virtual spiking retina and its adequation with mammalian retinas. In Perception, page 67, 2009.
-
Nicole Voges, Laurent Perrinet. Dynamics of cortical networks including long-range patchy connections, URL. In Eighth Göttingen Meeting of the German Neuroscience Society, pages T26-3C, 2009. Most studies of cortical network dynamics are either based on purely random wiring or neighborhood couplings [1], focussing on a rather local scale. Neuronal connections in the cortex, however, show a more complex spatial pattern composed of local and long-range patchy connections [2,3], as shown in the figure: it represents a tracer injection (gray areas) in the gray matter of a flattened cortex (top view), where black dots indicate neuron positions, blue lines their patchy axonal ramifications, and red lines the local connections. Moreover, to include distant synapses, one has to enlarge the spatial scale from the typically assumed 1 mm to 5 mm side length. As it is our aim to analyze more realistic network models of the cortex, we assume a distance dependent connectivity that reflects the geometry of dendrites and axons [3]. Here, we ask to what extent the assumption of specific geometric traits influences the resulting dynamical behavior of these networks. Analyzing various characteristic measures that describe spiking neurons (e.g., coefficient of variation, correlation coefficient), we compare the dynamical state spaces of different connectivity types: purely random or purely local couplings, a combination of local and distant synapses, and connectivity structures with patchy projections. On top of biologically realistic background states, a stimulus is applied in order to analyze their stabilities. As in previous studies [1], we also find different dynamical states depending on the external input rate and the numerical relation between excitatory and inhibitory synaptic weights. Preliminary results indicate, however, that transitions between these states are much sharper in the case of local or patchy couplings. This work is supported by EU Grant 15879 (FACETS). Thanks to Stefan Rotter who supervised the PhD project [3] this work is based on. Network dynamics are simulated with NEST/PyNN [4]. [1] A. Kumar, S. Schrader, A. Aertsen and S. Rotter, Neural Computation 20, 2008, 1-43. [2] T. Binzegger, R.J. Douglas and K.A.C. Martin, J. of Neurosci., 27(45), 2007, 12242-12254. [3] N. Voges, Fakultaet fuer Biologie, Albert-Ludwigs-Universitaet Freiburg, 2007. [4] NEST: M.O. Gewaltig and M. Diesmann, Scholarpedia 2(4):1430.
-
Nicole Voges, Laurent U. Perrinet. Dynamical state spaces of cortical networks representing various horizontal connectivities, URL. In Proceedings of COSYNE, 2009.
Most studies of cortical network dynamics are either based on purely random wiring or neighborhood couplings, e.g., [Kumar, Schrader, Aertsen, Rotter, 2008, Neural Computation 20, 1--43]. Neuronal connections in the cortex, however, show a complex spatial pattern composed of local and long-range connections, the latter featuring a so-called patchy projection pattern, i.e., spatially clustered synapses [Binzegger, Douglas, Martin, 2007, J. Neurosci. 27(45), 12242--12254]. The idea of our project is to provide and to analyze probabilistic network models that more adequately represent horizontal connectivity in the cortex. In particular, we investigate the effect of specific projection patterns on the dynamical state space of cortical networks. Assuming an enlarged spatial scale, we employ a distance dependent connectivity that reflects the geometry of dendrites and axons. We simulate the network dynamics using the neuronal network simulator NEST via PyNN. Our models are composed of conductance based integrate-and-fire neurons, representing fast spiking inhibitory and regular spiking excitatory cells. In order to compare the dynamical state spaces of previous studies with our network models, we consider the following connectivity assumptions: purely random or purely local couplings, a combination of local and distant synapses, and connectivity structures with patchy projections. Similar to previous studies, we also find different dynamical states depending on the input parameters: the external input rate and the numerical relation between excitatory and inhibitory synaptic weights. These states, e.g., synchronous regular (SR) or asynchronous irregular (AI) firing, are characterized by measures like the mean firing rate, the correlation coefficient, the coefficient of variation and so forth. On top of identified biologically realistic background states (AI), stimuli are applied in order to analyze their stability. Comparing the results of our different network models, we find that the parameter space necessary to describe all possible dynamical states of a network is much more concentrated if local couplings are involved. The transition between different states is shifted (with respect to both input parameters) and sharpened in dependence on the relative amount of local couplings. Local couplings strongly enhance the mean firing rate and lead to smaller values of the correlation coefficient. In terms of emergence of synchronous states, however, networks with local versus non-local or patchy versus random remote connections exhibit a higher probability of synchronized spiking. Concerning stability, preliminary results indicate that again networks with local or patchy connections show a higher probability of changing from the AI to the SR state. We conclude that the combination of local and remote projections bears important consequences on the activity of the network: the apparent differences we found for distinct connectivity assumptions in the dynamical state spaces suggest that network dynamics strongly depend on the connectivity structure. This effect might be even stronger with respect to the spatio-temporal spread of signal propagation. This work is supported by EC IP project FP6-015879 (FACETS).
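For readers who want to make the compared wiring schemes concrete, here is a small NumPy sketch of a distance-dependent connection probability with local plus "patchy" components. All geometry and parameter values are illustrative assumptions, and real patches are angularly clustered whereas this sketch reduces them to preferred-distance rings:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
pos = rng.uniform(0.0, 5.0, size=(n, 2))   # neurons on a 5x5 mm sheet

def connect_prob(d, sigma_local=0.3, p_local=0.6,
                 patch_centers=(1.5, 3.0), sigma_patch=0.2, p_patch=0.3):
    """Connection probability: local Gaussian + remote 'patchy' rings."""
    p = p_local * np.exp(-d**2 / (2 * sigma_local**2))
    for mu in patch_centers:                # patches at preferred distances
        p += p_patch * np.exp(-(d - mu)**2 / (2 * sigma_patch**2))
    return np.clip(p, 0.0, 1.0)

d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
adj = rng.random((n, n)) < connect_prob(d)  # Boolean adjacency matrix
np.fill_diagonal(adj, False)                # no self-connections
```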
Nicole Voges, Laurent Perrinet. Recurrent cortical networks with realistic horizontal connectivities show complex dynamics, URL. In Eighteenth Annual Computational Neuroscience Meeting: CNS*2009, Berlin, Germany, 18--23 July 2009, pages 10(Suppl 1):P176, 2009. Most studies on the dynamics of recurrent cortical networks are either based on purely random wiring or neighborhood couplings. They deal with a local spatial scale, where approx. 10% of all possible connections are realized. Neuronal wiring in the cortex, however, shows a complex spatial pattern composed of local and long-range patchy connections, i.e. spatially clustered synapses. We ask to what extent such geometric traits influence the 'idle' dynamics of cortical network models. Assuming an enlarged spatial scale, we consider distinct network architectures, ranging from purely random to distance dependent connectivities with patchy projections. The latter are tuned to reflect the axonal arborizations present in layer 2/3 of cat V1. We consider different types of conductance based integrate-and-fire neurons with distance-dependent synaptic delays. Analyzing the characteristic measures describing spiking neuronal networks (e.g. correlations, regularity), we explore and compare the phase spaces and activity patterns of different types of network models. To examine stability and signal propagation properties, we additionally applied local activity injections. Similar to previous studies, we observe synchronous regular firing (SR state) for large νext and low inhibition, while small νext combined with large g results in asynchronous irregular firing (AI). Our SRslow and SI states, the occurrence of 'mixed' states, and the more vertical phase space border significantly differ from previous findings.
Nicole Voges, Laurent U. Perrinet. Analyzing cortical network dynamics with respect to different connectivity assumptions, URL. In Proceedings of the second French conference on Computational Neuroscience, Marseille, 2008.
Nicole Voges, Jens Kremkow, Laurent U. Perrinet. Dynamics of cortical networks based on patchy connectivity patterns. In FENS Abstract, 2008
Claudio Simoncini, Laurent U. Perrinet, Anna Montagnini, Pascal Mamassian, Guillaume S. Masson. Different pooling of motion information for perceptual speed discrimination and behavioral speed estimation. In Vision Science Society, 2010
-
Laurent Perrinet, Guillaume S. Masson. Dynamical emergence of a neural solution for motion integration, URL. In Proceedings of AREADNE, 2010.
Laurent Perrinet. Qui créera le premier calculateur intelligent? [Who will create the first intelligent computer?], URL. DocSciences, (13), 2010.
-
Laurent Perrinet, Alexandre Reynaud, Frédéric Chavane, Guillaume S. Masson. Inferring monkey ocular following responses from V1 population dynamics using a probabilistic model of motion integration, URL. In Vision Science Society, 2009.
A short presentation of a large moving pattern elicits an ocular following response that exhibits many of the properties attributed to low-level motion processing, such as spatial and temporal integration, contrast gain control and divisive interaction between competing motions. Similar mechanisms have been demonstrated in V1 cortical activity in response to center-surround grating patterns measured with real-time optical imaging in awake monkeys (see poster of Reynaud et al., VSS09). Based on a previously developed Bayesian framework, we have developed an optimal statistical decoder of such an observed cortical population activity as recorded by optical imaging. This model aims at characterizing the statistical dependence between early neuronal activity and ocular responses, and its performance was analyzed by comparing this neuronal read-out and the actual motor responses on a trial-by-trial basis. First, we show that the relative performance of the behavioral contrast response function is similar to the best estimate obtained from the neural activity. In particular, we show that the latency of the ocular response increases with low contrast conditions as well as with noisier instances of the behavioral task as decoded by the model. Then, we investigate the temporal dynamics of both neuronal and motor responses and show how motion information as represented by the model is integrated in space to improve population decoding over time. Lastly, we explore how a surrounding velocity incongruous with the central excitation shunts the ocular response and how it is topographically represented in the cortical activity. Acknowledgment: European integrated project FACETS IST-15879.
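In its simplest form, the read-out described here reduces to a maximum a posteriori decode of velocity from the imaged population activity, compared trial by trial with the eye movement; the notation is illustrative:

```latex
% MAP decode of velocity v from the optically imaged activity A_t.
\begin{equation}
  \hat{\mathbf{v}}_t \;=\; \arg\max_{\mathbf{v}} \; p(\mathbf{v} \mid A_t)
  \;=\; \arg\max_{\mathbf{v}} \; p(A_t \mid \mathbf{v})\, p(\mathbf{v})
\end{equation}
```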
-
Laurent Perrinet, Nicole Voges, Jens Kremkow, Guillaume S. Masson. Decoding center-surround interactions in population of neurons for the ocular following response, URL. In Proceedings of COSYNE, 2009.
A short presentation of a large moving pattern elicits an Ocular Following Response (OFR) that exhibits many of the properties attributed to low-level motion processing, such as spatial and temporal integration, contrast gain control and divisive interaction between competing motions. Similar mechanisms have been demonstrated in V1 cortical activity in response to center-surround grating patterns measured with real-time optical imaging in awake monkeys. More recent experiments of OFR have used disk gratings and bipartite stimuli which are optimized to study the dynamics of center-surround integration. We quantified two main characteristics of the global spatial integration of motion from an intermediate map of possible local translation velocities: (i) a finite optimal stimulus size for driving OFR, surrounded by an antagonistic modulation, and (ii) a direction selective suppressive effect of the surround on the contrast gain control of the central stimuli [Barthelemy06, Barthelemy07]. In fact, the machinery behind the visual perception of motion and the subsequent sensorimotor transformation is confronted with uncertainties which are efficiently resolved in the primate's visual system. We may understand this response as an ideal observer in a probabilistic framework by using Bayesian theory [Weiss02], and we extended the ideal observer model into the dynamical domain to simulate the spatial integration of the different local motion cues within a probabilistic representation. We proved that this model is successfully adapted to model the OFR for the different experiments [Perrinet07neurocomp], that is, for different levels of noise with full field gratings, with disks of various sizes, and also for the effect of a flickering surround. However, another ad hoc inhibitory mechanism has to be added in this model to account for suppressive effects of the surround. We explore here a hypothesis where this could be understood as the effect of a recurrent prediction of information in the velocity map. In fact, in previous models the integration step assumes independence of the local information, while natural scenes are very predictable: due to the rigidity and inertia of physical objects in visual space, neighboring local spatiotemporal information is redundant, and one may introduce this a priori knowledge of the statistics of the input into the ideal observer model. We implement this in a realistic model of a layer representing velocities in a map of cortical columns, where predictions are implemented by lateral interactions within the cortical area. First, raw velocities are estimated locally from images and are propagated to this area in a feed-forward manner. Using this velocity map, we progressively learn the dependence of local velocities in a second layer of the model. This algorithm is cyclic since the prediction is using the local velocities, which are themselves using both the feed-forward input and the prediction: we control the convergence of this process by measuring results for different learning rates. Results show that this simple model is sufficient to disambiguate characteristic patterns such as the Barber-Pole illusion. Due to the recursive network which is modulating the velocity map, it also explains that the representation may exhibit some memory, such as when an object suddenly disappears or when presenting a dot followed by a line (line-motion illusion). Finally, we applied this model, tuned over a set of natural scenes, to gratings of increasing sizes. We observed first that the feed-forward response as tuned to neurophysiological data gave lower responses at higher eccentricities, and that this effect was greater for higher grating frequencies. Then, we observed that, depending on the size of the disk and on its spatial frequency, the recurrent network of lateral interactions modulated this response. Lastly, we explore how a surrounding velocity incongruous with the central excitation shunts the ocular response and how it is topographically represented in the cortical activity.
Laurent Perrinet, Guillaume S. Masson. Decoding the population dynamics underlying ocular following response using a probabilistic framework. In Eighteenth Annual Computational Neuroscience Meeting: CNS*2009, Berlin, Germany, 18--23 July 2009, pages 10(Suppl 1):P359, 2009.
Laurent Perrinet. Adaptive Sparse Spike Coding: applications of Neuroscience to the compression of natural images, URL. In Optical and Digital Image Processing Conference 7000, Proceedings of SPIE Volume 7000, 7--11 April 2008, pages 15 - S4, 2008. If modern computers are sometimes superior to cognition in some specialized tasks such as playing chess or browsing a large database, they can't beat the efficiency of biological vision for such simple tasks as recognizing a relative or following an object in a complex background. We present in this paper our attempt at outlining the dynamical, parallel and event-based representation for vision in the architecture of the central nervous system. We will illustrate this by showing that in a signal matching framework, a L/LN (linear/non-linear) cascade may efficiently transform a sensory signal into a neural spiking signal, and we apply this framework to a model retina. However, this code gets redundant when using an over-complete basis as is necessary for modeling the primary visual cortex: we therefore optimize the efficiency cost by increasing the sparseness of the code. This is implemented by propagating and canceling redundant information using lateral interactions. We compare the efficiency of this representation in terms of compression, that is, the reconstruction quality as a function of the coding length. This corresponds to a modification of the Matching Pursuit algorithm where the ArgMax function is optimized for competition, or Competition Optimized Matching Pursuit (COMP). We will particularly focus on bridging neuroscience and image processing and on the advantages of such an interdisciplinary approach.
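A minimal Matching Pursuit loop makes the greedy scheme concrete: each iteration emits one spike-like event and cancels the corresponding information from the residual, so code length is simply the number of events. This sketch shows plain MP only; the competition-optimized ArgMax of COMP is not reproduced here:

```python
import numpy as np

def matching_pursuit(x, Phi, n_events=20):
    """Greedy sparse coding; Phi is assumed to have unit-norm columns."""
    residual = x.astype(float).copy()
    events = []                          # rank-ordered (atom index, value)
    for _ in range(n_events):
        c = Phi.T @ residual             # linear stage of the L/LN cascade
        k = int(np.argmax(np.abs(c)))    # non-linear ArgMax: one "spike"
        events.append((k, c[k]))
        residual -= c[k] * Phi[:, k]     # cancel the encoded information
    return events, residual

# toy usage: a random unit-norm dictionary over 64-dimensional signals
rng = np.random.default_rng(0)
Phi = rng.normal(size=(64, 256))
Phi /= np.linalg.norm(Phi, axis=0)
events, res = matching_pursuit(rng.normal(size=64), Phi)
```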
Laurent Perrinet, Guillaume S. Masson. Modeling spatial integration in the ocular following response to center-surround stimulation using a probabilistic framework, URL. In Proceedings of COSYNE, 2008.
Laurent Perrinet. What adaptive code for efficient spiking representations? A model for the formation of receptive fields of simple cells, URL. In Proceedings of COSYNE, 2008.
Laurent Perrinet, Guillaume S. Masson. Decoding the population dynamics underlying ocular following response using a probabilistic framework, URL. In Proceedings of AREADNE, 2008.
-
Laurent Perrinet. On efficient sparse spike coding schemes for learning natural scenes in the primary visual cortex, URL. In Sixteenth Annual Computational Neuroscience Meeting: CNS*2007, Toronto, Canada, 7--12 July 2007, 2007.
We describe the theoretical formulation of a learning algorithm in a model of the primary visual cortex (V1) and present results on the efficiency of this algorithm by comparing it to the SparseNet algorithm [1]. Like the SparseNet algorithm, it is based on a model of signal synthesis as a Linear Generative Model, but differs in the efficiency criterion for the representation. This learning algorithm is based on an efficiency criterion derived from the Occam razor: for a similar quality, the shortest representation should be privileged. This inverse problem is NP-complete, and we propose here a greedy solution which is based on the architecture and nature of neural computations [2]. It proposes that the supra-threshold neural activity progressively removes redundancies in the representation based on a correlation-based inhibition, and provides a dynamical implementation close to the concept of neural assemblies from Hebb [3]. We present here results of simulation of this network with small natural images (available at https://laurentperrinet.github.io/publication/perrinet-19-hulk) and compare it to the SparseNet solution. Extending it to realistic images and to the NEST simulator (http://www.nest-initiative.org/), we show that this learning algorithm based on the properties of neural computations produces adaptive and efficient representations in V1. 1. Olshausen B, Field DJ: Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Res 1997, 37:3311-3325. 2. Perrinet L: Feature detection using spikes: the greedy approach. J Physiol Paris 2004, 98(4--6):530-539. 3. Hebb DO: The organization of behavior. Wiley, New York; 1949.
-
Laurent Perrinet, Frédéric V. Barthélemy, Guillaume S. Masson. Input-output transformation in the visuo-oculomotor loop: modeling the ocular following response to center-surround stimulation in a probabilistic framework. In 1ère conférence francophone NEUROsciences COMPutationnelles - NeuroComp, 2006
Laurent Perrinet, Jens Kremkow, Frédéric Barthélemy, Guillaume S. Masson, Frédéric Chavane. Input-output transformation in the visuo-oculomotor loop: modeling the ocular following response to center-surround stimulation in a probabilistic framework. In FENS, 2006
Laurent Perrinet, Jens Kremkow. Dynamical contrast gain control mechanisms in a layer 2/3 model of the primary visual cortex. In The Functional Architecture of the Brain: from Dendrites to Networks, Symposium in honour of Dr Suzanne Tyc-Dumont, 4--5 May 2006, GLM, Marseille, France, 2006. Computations in a cortical column are characterized by the dynamical, event-based nature of neuronal signals and are structured by the layered and parallel structure of cortical areas. But they are also characterized by their efficiency in terms of rapidity and robustness. We propose and study here a model of information integration in the primary visual cortex (V1) thanks to the parallel and interconnected network of similar cortical columns. In particular, we focus on the dynamics of contrast gain control mechanisms as a function of the distribution of information relevance in a small population of cortical columns. This cortical area is modeled as a collection of similar cortical columns which receive input and are linked according to a specific connectivity pattern which is relevant to this area. These columns are simulated using the NEST simulator (Morrison et al., 2004) using conductance-based Integrate-and-Fire neurons and consist vertically of 3 different layers. The architecture was inspired by neuro-physiological observations on the influence of neighboring activities on pyramidal cells' activity, correlates with the lateral flow of information observed in the primary visual cortex, notably in optical imaging experiments (Jancke et al., 2004), and is similar in its final implementation to the local micro-circuitry of the cortical column presented by Grossberg (2005). These columns show prototypical spontaneous dynamical behavior for different levels of noise which are relevant to the generic modeling of biological cortical columns (Kremkow et al., 2005). In the future, the connectivity will be derived from an algorithm that was used for modeling the transient spiking response of a layer of neurons to a flashed image and which was based on the Matching Pursuit algorithm (Perrinet, 2004). The visual input is first transmitted from the Lateral Geniculate Nucleus (LGN) using the model of Gazeres et al. (1998). It transforms the image flow into a stream of spikes with contrast gain control mechanisms specific to the retina and the LGN. This spiking activity converges to the pyramidal cells of layer 2/3 thanks to the specification of receptive fields in layer 4, providing a preference for oriented local contrasts in the spatio-temporal visual flow. In particular, we use in these experiments visual input organized in a center-surround spatial pattern which was optimized in size to maximize the response of a column in the center and the modulation of this response by the surround (bipartite stimulus). This class of stimuli provides different levels of input activation and of visual ambiguity in the visual space, present in the spatio-temporal correlations of the input spike flow and optimized to the resolution of cortical columns in the visual space. It thus provides a method to reveal the dynamics of information integration, and particularly of contrast gain control, which are characteristic of the function of V1.
-
Laurent Perrinet. An efficiency razor for model selection and adaptation in the primary visual cortex. In Fifteenth Annual Computational Neuroscience Meeting, 2006
We describe the theoretical formulation of a learning algorithm in a model of the primary visual cortex (V1) and present results on the efficiency of this algorithm by comparing it to the Sparsenet algorithm (Olshausen, 1996). Like the Sparsenet algorithm, it is based on a model of signal synthesis as a Linear Generative Model, but differs in the efficiency criteria for the representation. This learning algorithm is based on an efficiency criterion derived from the Occam razor: for a similar quality, the shortest representation should be privileged. This inverse problem is NP-complete, and we propose here a greedy solution which is based on the architecture and nature of neural computations (Perrinet, 2006). We present here results of a simulation of this network with small natural images (available at https://laurentperrinet.github.io/publication/perrinet-19-hulk) and compare it to the Sparsenet solution. We show that this solution based on neural computations produces an adaptive algorithm for efficient representations in V1.
Laurent Perrinet, Jens Kremkow. Dynamical contrast gain control mechanisms in a layer 2/3 model of the primary visual cortex. In Physiogenic and pathogenic oscillations: the beauty and the beast, 5th INMED/TINS CONFERENCE SEPTEMBER 9 - 12, 2006, La Ciotat, France, 2006
-
Jens Kremkow, Laurent U. Perrinet, Guillaume S. Masson, Ad Aertsen. Functional consequences of correlated excitatory and inhibitory conductances in cortical networks, URL. Journal of Computational Neuroscience, 28(3):579--94, 2010.
Neurons in the neocortex receive a large number of excitatory and inhibitory synaptic inputs. Excitation and inhibition dynamically balance each other, with inhibition lagging excitation by only a few milliseconds. To characterize the functional consequences of such correlated excitation and inhibition, we studied models in which this correlation structure is induced by feedforward inhibition (FFI). Simple circuits show that an effective FFI changes the integrative behavior of neurons such that only synchronous inputs can elicit spikes, causing the responses to be sparse and precise. Further, effective FFI increases the selectivity for the propagation of synchrony through a feedforward network, thereby increasing the stability to background activity. Last, we show that recurrent random networks with effective inhibition are more likely to exhibit dynamical network activity states such as have been observed in vivo. Thus, when a feedforward signal path is embedded in such a recurrent network, the stabilizing effect of effective inhibition creates a suitable substrate for signal propagation. In conclusion, correlated excitation and inhibition support the notion that synchronous spiking may be important for cortical processing.
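The FFI motif studied here is easy to sketch in PyNN-style code (0.8-era API); the population sizes, weights and the ~6 ms lag of inhibition below are illustrative assumptions, not the article's fitted parameters:

```python
import pyNN.nest as sim

sim.setup(timestep=0.1)

# feedforward inhibition: the input drives the target directly and,
# a few ms later, through an inhibitory relay population
inp = sim.Population(100, sim.SpikeSourcePoisson(rate=30.0))
inh = sim.Population(25, sim.IF_cond_exp())
out = sim.Population(1, sim.IF_cond_exp())

w_exc = sim.StaticSynapse(weight=0.002, delay=1.0)
w_inh = sim.StaticSynapse(weight=0.02, delay=6.0)   # inhibition lags ~6 ms

sim.Projection(inp, out, sim.AllToAllConnector(),
               synapse_type=w_exc, receptor_type='excitatory')
sim.Projection(inp, inh, sim.FixedProbabilityConnector(0.5),
               synapse_type=w_exc, receptor_type='excitatory')
sim.Projection(inh, out, sim.AllToAllConnector(),
               synapse_type=w_inh, receptor_type='inhibitory')

out.record('spikes')    # only input transients inside the brief window
sim.run(1000.0)         # between excitation and inhibition elicit spikes
sim.end()
```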
Jens Kremkow. Correlating Excitation and Inhibition in Visual Cortical Circuits: Functional Consequences and Biological Feasibility, PhD thesis, 2009. The primary visual cortex (V1) is one of the most studied cortical areas in the brain. Together with the retina and the lateral geniculate nucleus (LGN), it forms the early visual system. Artificial stimuli (i.e. drifting gratings, DG) have given insights into the neural basis of visual processing. However, researchers have recently started to use more complex natural visual stimuli (NI), arguing that low dimensional artificial stimuli are not sufficient for a complete understanding of the visual system. For example, whereas the responses of V1 neurons to DG are dense but with variable spike timings, the neurons respond with only few but precise spikes to NI. Furthermore, linear receptive field models provide a good fit to responses during simple stimuli; however, they often fail during NI. To investigate the mechanisms behind the stimulus dependent responses of cortical neurons, we have built a biophysical model of the early visual system. Our results show that during NI the LGN afferents show epochs of correlated activity, resulting in precise spike timings in V1. The sparseness of the responses to NI can be explained by correlated inhibitory conductance. We continue by investigating the origin of stimulus dependent nonlinear responses by comparing models of different complexity. Our results suggest that adaptive processes shape the responses, depending on the temporal properties of the stimuli. Lastly, we study the functional consequences of correlated excitatory and inhibitory conductances in more detail in generic models. The presented work gives new perspectives on the processing of the early visual system, in particular on the importance of correlated conductances.
Jens Kremkow, Laurent Perrinet, Guillaume S. Masson, Ad Aertsen. Functional consequences of correlated excitation and inhibition on single neuron integration and signal propagation through synfire chains, URL. In Eighth Göttingen Meeting of the German Neuroscience Society, pages T26-6B, 2009. Neurons receive a large number of excitatory and inhibitory synaptic inputs whose temporal interplay determines their spiking behavior. On average, excitation (Gexc) and inhibition (Ginh) balance each other, such that spikes are elicited by fluctuations [1]. In addition, it has been shown in vivo that Gexc and Ginh are correlated, with Ginh lagging Gexc by only a few milliseconds (6 ms), creating a small temporal integration window [2,3]. This correlation structure could be induced by feed-forward inhibition (FFI), which has been shown to be present at many sites in the central nervous system. To characterize the functional consequences of the FFI, we first modeled a simple circuit using spiking neurons with conductance based synapses and studied the effect on single neuron integration. We then coupled many such circuits to construct a feed-forward network (synfire chain [4,5]) and investigated the effect of FFI on signal propagation along such a feed-forward network. We found that the small temporal integration window, induced by the FFI, changes the integrative properties of the neuron. Only transient stimuli could produce a response when the FFI was active, whereas without FFI the neuron responded to both steady and transient stimuli. Due to the increase in selectivity to transient inputs, the conditions of signal propagation through the feed-forward network changed as well. Whereas synchronous inputs could reliably propagate, high asynchronous input rates, which are known to induce synfire activity [6], failed to do so. In summary, the FFI increased the stability of the synfire chain. Supported by DFG SFB 780, EU-15879-FACETS, BMBF 01GQ0420 to BCCN Freiburg. [1] Kumar A., Schrader S., Aertsen A. and Rotter S. (2008). The high-conductance state of cortical networks. Neural Computation, 20(1):1--43. [2] Okun M. and Lampl I. (2008). Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nat Neurosci, 11(5):535--7. [3] Baudot P., Levy M., Marre O., Monier C. and Frégnac Y. (2008). Submitted. [4] Abeles M. (1991). Corticonics: Neural circuits of the cerebral cortex. Cambridge, UK. [5] Diesmann M., Gewaltig M-O and Aertsen A. (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature, 402(6761):529--33. [6] Kumar A., Rotter S. and Aertsen A. (2008). Conditions for propagating synchronous spiking and asynchronous firing rates in a cortical network model. J Neurosci 28(20), 5268--80.
Jens Kremkow, Laurent Perrinet, Cyril Monier, Yves Frégnac, Guillaume S. Masson, Ad Aertsen. Control of the temporal interplay between excitation and inhibition by the statistics of visual input, URL URL2. In Eighteenth Annual Computational Neuroscience Meeting: CNS*2009, Berlin, Germany, 18--23 July 2009, oral presentation, pages 10(Suppl 1):O21, 2009.
Jens Kremkow, Laurent Perrinet, Alexandre Reynaud, Ad Aertsen, Guillaume S. Masson, Frédéric Chavane. Dynamics of non-linear cortico-cortical interactions during motion integration in early visual cortex: A spiking neuron model of an optical imaging study in the awake monkey, URL URL2. In Eighteenth Annual Computational Neuroscience Meeting: CNS*2009, Berlin, Germany, 18--23 July 2009, pages 10(Suppl 1):P176, 2009.
-
Jens Kremkow, Laurent Perrinet, Pierre Baudot, Manu Levy, Olivier Marre, Cyril Monier, Yves Frégnac, Guillaume Masson, Ad Aertsen. Control of the temporal interplay between excitation and inhibition by the statistics of visual input: a V1 network modelling study, URL. In Proceedings of the Society for Neuroscience conference, 2008.
In the primary visual cortex (V1), single cell responses to simple visual stimuli (gratings) are usually dense but with a high trial-by-trial variability. In contrast, when exposed to full field natural scenes, the firing patterns of these neurons are sparse but highly reproducible over trials (Marre et al., 2005; Frégnac et al., 2006). It is still not understood how these two classes of stimuli can elicit these two distinct firing behaviours. A common model for simple-cell computation in layer 4 is the "push-pull" circuitry (Troyer et al., 1998). It accounts for the observed anti-phase behaviour between excitatory and inhibitory conductances in response to a drifting grating (Anderson et al., 2000; Monier et al., 2008), creating a wide temporal integration window during which excitation is integrated without the shunting or opponent effect of inhibition and allowed to elicit multiple spikes. This is in contrast to recent results from intracellular recordings in vivo during presentation of natural scenes (Baudot et al., submitted). Here the excitatory and inhibitory conductances were highly correlated, with inhibition lagging excitation by only a few milliseconds (~6 ms). This small lag creates a narrow temporal integration window such that only synchronized excitatory inputs can elicit a spike, similar to parallel observations in other cortical sensory areas (Wehr and Zador, 2003; Okun and Lampl, 2008). To investigate the cellular and network mechanisms underlying these two different correlation structures, we constructed a realistic model of the V1 network using spiking neurons with conductance based synapses. We calibrated our model to fit the irregular ongoing activity pattern as well as in vivo conductance measurements during drifting grating stimulation, and then extracted predicted responses to natural scenes seen through eye movements. Our simulations reproduced the experimental observations described above, with anti-phase behaviour between excitation and inhibition during gratings and phase-lagged activation during natural scenes. In conclusion, the same cortical network that shows dense and variable responses to gratings exhibits sparse and precise spiking to natural scenes. Work is under way to show to what extent this feature is specific to the feedforward vs recurrent nature of the modelled circuit.
Jens Kremkow, Laurent U. Perrinet, Ad Aertsen, Guillaume S. Masson. Functional properties of feed-forward inhibition, URL. In Proceedings of the second French conference on Computational Neuroscience, Marseille, 2008.
-
Jens Kremkow, Laurent Perrinet, Arvind Kumar, Ad Aertsen, Guillaume Masson. Synchrony in thalamic inputs enhances propagation of activity through cortical layers, URL URL2. In Sixteenth Annual Computational Neuroscience Meeting: CNS*2007, Toronto, Canada, 7--12 July 2007, 2007.
Sensory input enters the cortex via the thalamocortical (TC) projection, where it elicits large postsynaptic potentials in layer 4 neurons [1]. Interestingly, the TC connections account for only 15% of the synapses onto these neurons. It has therefore been controversially discussed how thalamic input can drive the cortex. Strong TC synapses have been one suggestion to ensure the strength of the TC projection ("strong-synapse model"). Another possibility is that the excitation from single thalamic fibers is weak but gets amplified by recurrent excitatory feedback in layer 4 ("amplifier model"). Bruno and Sakmann [2] recently provided new evidence that individual TC synapses in vivo are weak and only produce small excitatory postsynaptic potentials. However, they suggested that thalamic input can activate the cortex due to synchronous firing and that cortical amplification is not required. This would support the "synchrony model" proposed by correlation analysis [3]. Here, we studied the effect of correlation in the TC input, with weak synapses, on the responses of a layered cortical network model. The connectivity of the layered network was taken from Binzegger et al. 2004 [4]. The network was simulated using NEST [5] with the Python interface PyNN [6] to enable interoperability with different simulators. The sensory input to layer 4 was modelled by a simple retino-geniculate model of the transformation of light into spike trains [7], which was implemented by leaky integrate-and-fire model neurons. We found that introducing correlation into TC inputs enhanced the likelihood to produce responses in layer 4 and improved the activity propagation across layers. In addition, we compared the response of the cortical network under different noise conditions and obtained contrast response functions which were in accordance with neurophysiological observations. This work is supported by the 6th RFP of the EU (grant no. 15879-FACETS) and by the BMBF grant 01GQ0420 to the BCCN Freiburg. 1. Chung S, Ferster D: Strength and orientation tuning of the thalamic input to simple cells revealed by electrically evoked cortical suppression. Neuron 1998, 20:1177-1189. 2. Bruno M, Sakmann B: Cortex is driven by weak but synchronously active thalamocortical synapses. Science 2006, 312:1622-1627. 3. Alonso JM, Usrey WM, Reid RC: Precisely correlated firing in cells of the lateral geniculate nucleus. Nature 1996, 383:815-819. 4. Binzegger T, Douglas RJ, Martin KAC: A quantitative map of the circuit of the cat primary visual cortex. J Neurosci 2004, 24:8441-8453. 5. NEST: http://www.nest-initiative.org. 6. PyNN: http://neuralensemble.org/PyNN. 7. Gazeres N, Borg-Graham LJ, Frégnac Y: A phenomenological model of visually evoked spike trains in cat geniculate nonlagged X-cells. Vis Neurosci 1998, 15:1157-1174.
Mina Aliakbari Khoei, Laurent Perrinet, Guillaume S. Masson. Dynamical emergence of a neural solution for motion integration, URL . In Proceedings of Tauc, 2010
Sylvain Fischer, Filip Sroubek, Laurent U. Perrinet, Rafael Redondo, Gabriel Cristóbal. Self-invertible 2D log-Gabor wavelets, URL . International Journal of Computer Vision, 2007
Sylvain Fischer, Rafael Redondo, Laurent Perrinet, Gabriel Cristóbal. Sparse approximation of images inspired from the functional architecture of the primary visual areas, URL URL2 . EURASIP Journal on Advances in Signal Processing, special issue on Image Perception, Article ID 90727, 16 pages, 2007
-
Andrew P. Davison, Daniel Brüderle, Jochen Eppler, Jens Kremkow, Eilif Muller, Dejan Pecevski, Laurent Perrinet, Pierre Yger. PyNN: A Common Interface for Neuronal Network Simulators, URL . Frontiers in Neuroinformatics, 2:11, 2008
Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN.
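As a concrete illustration, a minimal script in the spirit of the paper is shown below; it is written against the later, current PyNN API (so the exact calls are an assumption relative to the 2008 release), and switching the first import line between pyNN.nest, pyNN.neuron, etc. is the only change needed to move between backends::

    import pyNN.nest as sim   # swap for pyNN.neuron, pyNN.brian, ...

    sim.setup(timestep=0.1)   # ms

    # a small population driven one-to-one by Poisson noise sources
    cells = sim.Population(100, sim.IF_cond_exp(tau_m=20.0), label="cells")
    noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
    sim.Projection(noise, cells, sim.OneToOneConnector(),
                   sim.StaticSynapse(weight=0.01, delay=1.0))  # weight in uS

    cells.record("spikes")
    sim.run(1000.0)           # ms
    data = cells.get_data()   # simulator-independent Neo data structure
    sim.end()

Because the recorded data come back in a simulator-independent structure, downstream analysis and visualization code is also unaffected by which backend produced them.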
-
Andrew Davison, Pierre Yger, Jens Kremkow, Laurent Perrinet, Eilif Muller. PyNN: towards a universal neural simulator API in Python, URL URL2 . In Sixteenth Annual Computational Neuroscience Meeting: CNS*2007, Toronto, Canada. 7--12 July 2007, 2007
Trends in programming language development and adoption point to Python as the high-level systems integration language of choice. Python leverages a vast developer base external to the neuroscience community, and promises leaps in simulation complexity and maintainability to any neural simulator that adopts it. PyNN (http://neuralensemble.org/PyNN) strives to provide a uniform application programming interface (API) across neural simulators. Presently NEURON and NEST are supported, and support for other simulators and for neuromorphic VLSI hardware is under development. With PyNN it is possible to write a simulation script once and run it without modification on any supported simulator. It is also possible to write a script that uses capabilities specific to a single simulator. While this sacrifices simulator-independence, it adds flexibility, and can be a useful step in porting models between simulators. The design goals of PyNN include allowing access to low-level details of a simulation where necessary, while providing the capability to model at a high level of abstraction, with concomitant gains in development speed and simulation maintainability. Another of our aims with PyNN is to increase the productivity of neuroscience modeling, by making it faster to develop models de novo, by promoting code sharing and reuse across simulator communities, and by making it much easier to debug, test and validate simulations by running them on more than one simulator. Modelers would then become free to devote more software development effort to innovation, building on the simulator core with new tools such as network topology databases, stimulus programming, analysis and visualization tools, and simulation accounting. The resulting, community-developed 'meta-simulator' system would then represent a powerful tool for overcoming the so-called complexity bottleneck that is presently a major roadblock for neural modeling.
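The backend choice can even be deferred to run time with nothing but standard Python; a sketch under the same assumptions about the PyNN API as above (the script name and command line are hypothetical)::

    import sys
    from importlib import import_module

    # usage (hypothetical):  python model.py nest   or   python model.py neuron
    backend = sys.argv[1] if len(sys.argv) > 1 else "nest"
    sim = import_module("pyNN." + backend)  # same API whatever the backend

    sim.setup()
    cells = sim.Population(100, sim.IF_cond_exp())
    cells.record("spikes")
    sim.run(500.0)
    sim.end()

Later PyNN releases ship a helper in this spirit (pyNN.utility.get_simulator), which selects the backend from the command line.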
-
Emmanuel Daucé, Laurent Perrinet. Computational Neuroscience, from Multiple Levels to Multi-level, URL . Journal of Physiology (Paris), 104(1--2):1--4, 2010
Despite the long and fruitful history of neuroscience, a global, multi-level description of cardinal brain functions is still out of reach. Using analytical or numerical approaches, Computational Neuroscience aims at uncovering such common principles by using concepts from Dynamical Systems and Information Theory. The aim of this Special Issue of the Journal of Physiology (Paris) is to reflect the latest advances in this field, as presented at the NeuroComp08 conference that took place in October 2008 in Marseille (France). By highlighting a selection of works presented at the conference, we wish to illustrate the intrinsic diversity of this field of research, but also the need for a unification effort that is becoming more and more necessary to understand the brain in its full complexity, from multiple levels of description to a multi-level understanding.
Amarender Bogadhi, Anna Montagnini, Pascal Mamassian, Laurent U. Perrinet, Guillaume S. Masson. A recurrent Bayesian model of dynamic motion integration for smooth pursuit. In Vision Sciences Society, 2010
-
Amarender Bogadhi, Anna Montagnini, Pascal Mamassian, Laurent U. Perrinet, Guillaume S. Masson. Pursuing motion illusions: a realistic oculomotor framework for Bayesian inference, URL . Vision Research, 51(8):867--80, 2011
Accuracy in estimating an object's global motion over time is affected not only by the noise in visual motion information but also by the spatial limitation of the local motion analyzers (aperture problem). Perceptual and oculomotor data demonstrate that during the initial stages of motion processing, 1D motion cues related to the object's edges have a dominating influence over the estimate of the object's global motion. However, during the later stages, 2D motion cues related to terminators (edge endings) progressively take over, leading to a final correct estimate of the object's global motion. Here, we propose a recursive extension to the Bayesian framework for motion processing (Weiss, Simoncelli, & Adelson, 2002), cascaded with a model of the oculomotor plant, to describe the dynamic integration of 1D and 2D motion information in the context of smooth pursuit eye movements. In the recurrent Bayesian framework, the prior defined in velocity space is combined with the two independent measurement likelihood functions, representing edge-related and terminator-related information respectively, to obtain the posterior. The prior is updated with the posterior at the end of each iteration step. The maximum a posteriori (MAP) estimate of the posterior distribution at every time step is fed into the oculomotor plant to produce eye velocity responses that are compared to the human smooth pursuit data. The recurrent model was tuned with the variance of pursuit responses to either "pure" 1D or "pure" 2D motion. The oculomotor plant was tuned with an independent set of oculomotor data, including the effects of line length (i.e. stimulus energy) and directional anisotropies in the smooth pursuit responses. The model provides not only an accurate qualitative account of dynamic motion integration but also a quantitative account that is close to the smooth pursuit response across several conditions (three contrasts and three speeds) for two human subjects.
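For Gaussian distributions, the recursive core of such a model reduces to a precision-weighted combination that can be sketched in a few lines; every numerical value below (likelihood widths, the growth in reliability of the 2D cue, the target velocity) is an illustrative assumption, and the oculomotor plant is omitted::

    import numpy as np

    v_true = np.array([1.0, 1.0])      # oblique target motion (deg/s)

    # 1D (edge-related) likelihood: precise orthogonal to the edge, vague along it
    L1_mean = np.array([1.0, 0.0])     # aperture-biased velocity estimate
    L1_prec = np.diag([50.0, 0.1])     # precision (inverse covariance)

    # 2D (terminator-related) likelihood: unbiased but initially unreliable
    L2_mean = v_true.copy()
    L2_prec = np.diag([0.5, 0.5])

    prior_mean = np.zeros(2)           # low-speed prior centred on zero
    prior_prec = np.diag([1.0, 1.0])

    for step in range(20):
        # product of Gaussians: precisions add, means combine precision-weighted
        post_prec = prior_prec + L1_prec + L2_prec
        post_mean = np.linalg.solve(
            post_prec,
            prior_prec @ prior_mean + L1_prec @ L1_mean + L2_prec @ L2_mean)
        # the MAP (= posterior mean here) would drive the oculomotor plant;
        # the posterior becomes the prior of the next iteration
        prior_mean, prior_prec = post_mean, post_prec
        # assumption: terminator information gains reliability over time
        L2_prec = L2_prec * 1.5

    print(post_mean)  # moves from the 1D-biased estimate toward v_true

Early iterations are dominated by the precise but aperture-biased 1D likelihood; as the precision of the terminator-related likelihood grows, the MAP estimate rotates toward the true global motion, mirroring the 1D-to-2D transition described in the abstract.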
Frédéric Barthélemy, Laurent Perrinet, Éric Castet, Guillaume S. Masson. Dynamics of distributed 1D and 2D motion representations for short-latency ocular following, URL . Vision Research, 48(4):501--22, 2008