ANR CausaL (2018/2022)

Humans have an extraordinary capacity to infer cause-effect relations. In particular, we excel at forming beliefs about the causal effect of actions. Causal learning provides the basis for rational decision-making and allows people to engage in meaningful life and social interactions. Causal learning is a form of goal-directed learning, defined as the capacity to rapidly learn the consequences of actions and to select behaviours according to goals and motivational state. This ability is based on internal models of the consequences of our behaviours and relies on learning rules determined by the contingency between actions and outcomes. To a first approximation, contingency ΔP is operationalized as the difference between two conditional probabilities: (i) P(O|A), the probability of outcome O given action A; and (ii) P(O|¬A), the probability of the outcome when the action is withheld. In everyday life, people perceive their actions as causing a given outcome if the contingency is positive, and as preventing it if negative; when P(O|A) and P(O|¬A) are equal, people report no causal effect. Despite the centrality of causal learning, a clear understanding of both the internal computations and the neural substrates (the so-called cognitive architectures) is currently missing. Our project will therefore address two key questions:

  1. What are the key internal representations of causal beliefs, and what are the computational processes that enable their formation during learning?

  2. How are these internal representations and computational processes implemented in the brain?

CausaL will address these two objectives through two dedicated research work packages (WPs).
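The contingency measure ΔP described above is simple to compute from event counts. A minimal sketch (the function name and the example counts are illustrative, not from the project):

```python
def delta_p(n_outcome_with_action, n_action, n_outcome_without_action, n_no_action):
    """Contingency ΔP = P(O|A) - P(O|¬A), estimated from event counts."""
    p_o_given_a = n_outcome_with_action / n_action
    p_o_given_not_a = n_outcome_without_action / n_no_action
    return p_o_given_a - p_o_given_not_a

# Outcome follows the action more often than not: positive ΔP ("causing")
print(delta_p(18, 20, 5, 20))   # 0.90 - 0.25 = 0.65
# Outcome is rarer when the action is taken: negative ΔP ("preventing")
print(delta_p(2, 20, 12, 20))   # 0.10 - 0.60 = -0.50
```

When ΔP is zero, the action and the outcome are statistically independent, which matches the report of no causal effect.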


This work was supported by the ANR project ANR-18-AAPG "CAUSAL, Cognitive Architectures of Causal Learning".
Laurent U Perrinet
Researcher in Computational Neuroscience



Animal behavior has to constantly adapt to changes, for instance when the state of an environmental context switches unexpectedly. For an agent interacting with such a volatile environment, it is important to respond to these switches accurately and with the shortest possible delay. However, this operation generally has to be performed in the presence of noisy sensory inputs and solely on the basis of accumulated information. It has already been shown that human observers can accurately anticipate the motion direction of a visual target with their eye movements when a random sequence of rightward/leftward motions is defined by a bias in direction probability. Here, we generalized this paradigm: observers had to anticipate different random biases within contextual blocks of random length. Experimental results were compared to those of a probabilistic agent that is optimal with respect to this switching model. We found a better fit between the behaviorally observed anticipatory responses and those of the probabilistic agent than with other models, such as a leaky-integrator model. Moreover, we could similarly fit the level of confidence reported by human observers with that provided by the model and derive a common marker of inter-subject variability, titrating each observer's preference between exploration and exploitation. These results provide evidence that, even in such a volatile environment, human observers may efficiently represent an internal belief, along with its precision, and use this representation both for sensorimotor control and for explicit judgments. This work proposes a novel approach to test human cognitive abilities more generically in uncertain and dynamic environments.
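The comparison above can be illustrated with a toy simulation. The sketch below is an assumption-laden simplification, not the study's actual models or parameters: the direction bias switches at a fixed hazard rate among three illustrative values, a grid-based Bayesian filter that knows the hazard rate tracks the bias, and a leaky integrator with a hand-picked time constant serves as the baseline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Volatile environment: the rightward-motion probability p_true switches
# between blocks of random (geometric) length. Values are illustrative.
h = 0.02                              # per-trial switch (hazard) rate
n_trials = 2000
levels = np.array([0.25, 0.5, 0.75])  # possible direction biases
p_true = np.empty(n_trials)
p = rng.choice(levels)
for t in range(n_trials):
    if rng.random() < h:
        p = rng.choice(levels)
    p_true[t] = p
obs = (rng.random(n_trials) < p_true).astype(int)   # 1 = rightward motion

# Probabilistic agent: discrete grid filter over the bias, with known hazard.
grid = np.linspace(0.01, 0.99, 99)
prior = np.full(grid.size, 1.0 / grid.size)
post = prior.copy()
bayes_est = np.empty(n_trials)
for t, o in enumerate(obs):
    # Transition: with probability h the bias is redrawn from the prior.
    pred = (1 - h) * post + h * prior
    like = grid if o == 1 else 1 - grid
    post = pred * like
    post /= post.sum()
    bayes_est[t] = post @ grid        # posterior mean of the bias

# Leaky-integrator baseline with a fixed forgetting time constant.
tau = 40.0
leak_est = np.empty(n_trials)
x = 0.5
for t, o in enumerate(obs):
    x += (o - x) / tau
    leak_est[t] = x

mse_bayes = np.mean((bayes_est - p_true) ** 2)
mse_leak = np.mean((leak_est - p_true) ** 2)
print(f"MSE Bayes: {mse_bayes:.4f}  MSE leaky: {mse_leak:.4f}")
```

Unlike the leaky integrator, the grid filter also carries a full posterior over the bias, so its width provides a natural confidence signal of the kind the study relates to observers' explicit confidence reports.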