CANCELLED - Discovering dynamic states of neural populations

Neural responses and behavior are influenced by internal brain states, such as arousal or task context. Ongoing variations of these internal states affect global patterns of neural activity, giving rise to apparent variability of neural responses under the same experimental conditions. Uncovering the dynamics of internal states from data has proved difficult with traditional techniques based on trial-averaged responses of single neurons.
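The abstract does not specify the speaker's method, but a common way to expose such hidden states from single-trial data is a latent-state model rather than a trial average. Below is a minimal illustrative sketch (all rates, transition probabilities, and names invented for this example) that simulates a two-state internal process modulating population spike counts, then recovers the state sequence by Viterbi decoding under a Poisson hidden Markov model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state model: a "quiet" and an "active" internal state,
# each with its own population firing rate (spikes per time bin).
rates = np.array([2.0, 8.0])            # Poisson emission rates, one per state
trans = np.array([[0.95, 0.05],         # sticky state-transition probabilities
                  [0.05, 0.95]])
init = np.array([0.5, 0.5])

# Simulate a hidden state sequence and the spike counts it generates
T = 300
z = np.zeros(T, dtype=int)
for t in range(1, T):
    z[t] = rng.choice(2, p=trans[z[t - 1]])
counts = rng.poisson(rates[z])

# Viterbi decoding of the most likely state sequence.
# log P(count | state) up to the log(count!) term, which is state-independent.
loglik = counts[:, None] * np.log(rates)[None, :] - rates[None, :]
logA = np.log(trans)
delta = np.log(init) + loglik[0]
back = np.zeros((T, 2), dtype=int)
for t in range(1, T):
    cand = delta[:, None] + logA        # cand[i, j]: best path ending i -> j
    back[t] = np.argmax(cand, axis=0)
    delta = cand[back[t], [0, 1]] + loglik[t]
path = np.empty(T, dtype=int)
path[-1] = np.argmax(delta)
for t in range(T - 2, -1, -1):          # backtrack through the pointers
    path[t] = back[t + 1, path[t + 1]]
```

With well-separated rates the decoded path recovers most of the simulated state sequence, structure that trial averaging would blur away.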

CANCELLED - Deliberate ignorance: The curious choice not to know

Western history of thought abounds with claims that knowledge is valued and sought. Yet people often choose not to know. We call the conscious choice not to seek or use knowledge (or information) deliberate ignorance. Using examples from a wide range of domains, we demonstrate that deliberate ignorance has important functions. We systematize types of deliberate ignorance, describe their functions, discuss their normative desirability, and consider how they can be modeled. We conclude that the desire not to know is no anomaly.

Metacontrol of reinforcement learning

Modern theories of reinforcement learning posit two systems competing for control of behavior: a "model-free" or "habitual" system that learns cached state-action values, and a "model-based" or "goal-directed" system that learns a world model which is then used to plan actions. I will argue that humans can adaptively invoke model-based computation when its benefits outweigh its costs. A simple metacontrol learning rule can capture the dynamics of this cost-benefit analysis. Neuroimaging evidence points to the role of cognitive control regions in this computation.
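To make the cost-benefit idea concrete, here is a hypothetical sketch (numbers, names, and the learning rule are all invented for illustration, not the speaker's actual model): model-based planning is invoked only while a running estimate of its benefit exceeds its fixed cost, and that estimate is itself updated by a simple delta rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cost-benefit arbitration: plan (model-based) only while the
# learned benefit estimate exceeds the computational cost of planning.
cost_mb = 0.1      # fixed cost of model-based computation
benefit = 0.5      # running estimate of the model-based advantage
alpha = 0.1        # learning rate of the delta rule below

mb_trials = 0
for trial in range(100):
    use_model_based = benefit > cost_mb
    # Simulated reward advantage of planning; it shrinks with practice
    # as the habitual system catches up on a stable task.
    gain = max(0.0, 0.6 - 0.01 * trial) + 0.05 * rng.standard_normal()
    if use_model_based:
        mb_trials += 1
        # Delta rule: nudge the benefit estimate toward the observed gain
        benefit += alpha * (gain - benefit)
```

This toy rule reproduces the qualitative signature described in the abstract: deliberate, model-based control early in learning, with a habit-like handoff to the model-free system once planning no longer pays for itself.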

Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets

How recurrently connected networks of spiking neurons in the brain acquire powerful information processing capabilities through learning has remained a mystery. This lack of understanding is linked to a lack of learning algorithms for recurrent networks of spiking neurons (RSNNs) that are both functionally powerful and implementable by known biological mechanisms. The gold standard for learning in recurrent neural networks in machine learning is backpropagation through time (BPTT), which implements stochastic gradient descent.

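As background on what BPTT actually computes, here is a minimal numpy sketch using a non-spiking tanh RNN (purely illustrative; the talk's RSNN setting is more complex): the network is unrolled over time, an error signal from a loss on the final hidden state is propagated backwards step by step, and recurrent-weight gradients are accumulated along the way.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny vanilla RNN: h_{t+1} = tanh(Wx @ x_t + Wh @ h_t), loss on final state.
T, n_in, n_h = 5, 3, 4
Wx = rng.standard_normal((n_h, n_in)) * 0.5
Wh = rng.standard_normal((n_h, n_h)) * 0.5
xs = rng.standard_normal((T, n_in))
target = rng.standard_normal(n_h)

def forward(Wh):
    """Unroll the network, returning all hidden states h_0..h_T."""
    hs = [np.zeros(n_h)]
    for t in range(T):
        hs.append(np.tanh(Wx @ xs[t] + Wh @ hs[-1]))
    return hs

def loss(Wh):
    h_T = forward(Wh)[-1]
    return 0.5 * np.sum((h_T - target) ** 2)

def bptt_grad(Wh):
    """Gradient of the loss w.r.t. Wh via backpropagation through time."""
    hs = forward(Wh)
    grad = np.zeros_like(Wh)
    # Error signal at the last pre-activation, then walk backwards in time
    delta = (hs[-1] - target) * (1 - hs[-1] ** 2)
    for t in range(T - 1, -1, -1):
        grad += np.outer(delta, hs[t])             # contribution of step t
        delta = (Wh.T @ delta) * (1 - hs[t] ** 2)  # propagate one step back
    return grad
```

The backward pass requires replaying the entire state history in reverse (the `hs` list above), which is exactly the non-local requirement that makes a literal biological implementation of BPTT implausible and motivates the alternatives discussed in the talk.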
Meeting Day of the ENS Scientific Departments (Journée de Rencontres des Départements Scientifiques de l'ENS)

Organized by Yves Laszlo and Nicolas Baumard.

This meeting day aims to promote scientific collaboration between departments and to create new interdisciplinary projects.


9:30 INTRODUCTION - Yves Laszlo

9:45 COMPUTER SCIENCE
With presentations on collaborations with Biology and Cognitive Science

9:45 ‘Recent advances in machine learning’ - Francis Bach