ENS, room U207, 29 rue d'Ulm, 75005 Paris
PROGRAMME
2pm - 2.45pm Brice Bathellier (Paris-Saclay Institute of Neurosciences): Deciphering and manipulating neural population codes for auditory perception in mice
Perception of speech, music, or simply the interpretation of various acoustic events, such as mechanical shocks, relies on the recognition of complex acoustic features that include multi-frequency patterns as well as temporal modulations of frequency and intensity. Such complex features activate highly overlapping sets of receptor cells in the cochlea, raising the question of how the brain constructs clearly distinct percepts from this densely organized input. To investigate this question in the mouse auditory system, we compared large-scale neural population representations of a number of simple (pure tones) and complex sounds at different stages of the auditory system, including the cochlea, the inferior colliculus, and the auditory cortex. Cochlear representations were derived from a detailed model simulating auditory nerve activity, while colliculus and cortex representations were evaluated from extensive two-photon calcium imaging data sets of 15,000 and 60,000 neurons, respectively. Using population vector analysis, we measured the similarity of firing rate-based representations for various sounds. We found that at all stages single-frequency tones are encoded by well-decorrelated (minimally overlapping) neural populations, consistent with the segregated tonotopy observed throughout the auditory system. In contrast, representations of more complex sounds became increasingly decorrelated from cochlea to cortex. In particular, in cortex, temporal modulations started to be encoded in specific population activity patterns, independent of response time courses. This suggests that the computations leading to cortical representations help segregate different complex sounds into distinct neural ensembles that could support distinct percepts. To begin bridging these functional observations with causal mechanisms, we used optogenetic manipulations during an auditory discrimination task in mice. We showed that the auditory cortex, but not the inferior colliculus, can be bypassed by coarser, possibly faster pathways for simple pure tone discriminations. Yet when the sensory decision was more complex, involving temporal integration of information, auditory cortex activity was required for sound discrimination, and targeted activation of specific cortical ensembles changed perceptual decisions as expected from our readout of the cortical code. Together, our results suggest that the auditory cortex represents complex sound features in more segregated neural ensembles, which contribute to perceptual decisions by disentangling the more densely represented subcortical information.
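A minimal sketch of the population vector analysis mentioned above, on simulated data (the variable names and the Poisson rate-generation are illustrative assumptions, not from the study): each sound's representation is a vector of firing rates across neurons, and overlap between two representations is measured as their pairwise correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firing-rate matrix: rows = sounds, columns = neurons.
# In the study these responses came from imaging/modelling; here they
# are simulated for illustration only.
n_sounds, n_neurons = 10, 500
responses = rng.poisson(lam=2.0, size=(n_sounds, n_neurons)).astype(float)

# Population vector analysis: Pearson correlation between every pair of
# population response vectors. Low correlation = well-decorrelated
# (minimally overlapping) representations.
similarity = np.corrcoef(responses)          # shape (n_sounds, n_sounds)

# Mean off-diagonal correlation summarizes overlap across the sound set;
# the abstract's claim is that this drops from cochlea to cortex for
# complex sounds.
off_diag = similarity[~np.eye(n_sounds, dtype=bool)]
print(f"mean pairwise correlation: {off_diag.mean():.3f}")
```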
Jonathan Pillow: New methods for identifying latent manifold structure from neural data
An important problem in neuroscience is to identify low-dimensional structure underlying noisy, high-dimensional spike trains. In this talk, I will discuss recent advances for tackling this problem in single and multi-region neural datasets. First, I will discuss the Gaussian Process Latent Variable Model with Poisson observations (Poisson-GPLVM), which seeks to identify a low-dimensional nonlinear manifold from spike train data. This model can successfully handle datasets that appear high-dimensional with linear dimensionality reduction methods like PCA, and we show that it can identify a 2D spatial map underlying hippocampal place cell responses from their spike trains alone. Second, I will discuss recent extensions to Poisson-spiking Gaussian Process Factor Analysis (Poisson-GPFA), which incorporates separate signal and noise dimensions as well as a multi-region model with coupling between latent variables governing activity in different regions. This model provides a powerful tool for characterizing the flow of signals between brain areas, and we illustrate its applicability using multi-region recordings from mouse visual cortex.
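As a rough illustration of the generative model behind the Poisson-GPLVM (a simulation sketch under stated assumptions, not the inference method from the talk): a low-dimensional latent trajectory is mapped through smooth nonlinear tuning functions, and spike counts are drawn from a Poisson distribution. The Gaussian-bump tuning below stands in for draws from a GP prior, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# 2D latent trajectory (e.g., an animal's position): a smooth random walk.
T, n_neurons = 1000, 50
latents = np.cumsum(rng.normal(scale=0.05, size=(T, 2)), axis=0)

# Smooth nonlinear tuning: Gaussian bumps with random centers, a crude
# stand-in for tuning functions drawn from a GP prior.
centers = rng.uniform(latents.min(), latents.max(), size=(n_neurons, 2))
sq_dist = ((latents[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
rates = 5.0 * np.exp(-sq_dist / 2.0)         # firing rates, (T, n_neurons)

# Poisson observations: spike counts in each time bin.
spikes = rng.poisson(rates)

# Linear methods like PCA see many dimensions here because the
# latent-to-rate map is nonlinear; a nonlinear model such as the GPLVM
# can recover the underlying 2D manifold.
evals = np.linalg.eigvalsh(np.cov(spikes.T))[::-1]
print("fraction of variance in top 2 PCs:", evals[:2].sum() / evals.sum())
```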
Jennifer Linden: The Mechanisms of Minding the Gap
Humans are remarkably sensitive to brief interruptions of ongoing sound. Thresholds for detection of brief silent gaps in noise are typically less than 6 ms in normal young adults. Gap-detection thresholds are often higher in older adults, patients with developmental disorders, or subjects with auditory processing difficulties; therefore, gap-in-noise detection tasks are routinely used in audiological clinics to assess auditory temporal acuity. However, despite the simplicity of this task and its importance as a clinical tool, the neural mechanisms of gap detection are still poorly understood. Here I describe recent insights into the neural mechanisms of gap detection gained from studies of an unusual mouse model of gap-detection deficits. Neurophysiological data and computational modelling reveal that central auditory responses to sound offsets (disappearances) play a key role in defining the limits of gap-in-noise acuity. Additionally, adaptive gain control in higher auditory brain areas appears to increase gap-in-noise sensitivity. These results indicate that gap-in-noise detection relies not only on peripheral and brainstem mechanisms that produce precisely timed neural responses to sound offsets and onsets, but also on higher central auditory mechanisms of adaptation and intensity gain control. Thus, elevated gap-detection thresholds in patients with auditory perceptual difficulties could arise from abnormalities in many different auditory brain areas.
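A toy model (an envelope-based offset detector of my own construction, not the published model from the talk) illustrating why offset responses can set the limits of gap-in-noise detection: a silent gap produces a downward envelope transient, and a detector that responds to envelope drops only signals gaps long enough to produce a clear offset.

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 20000                               # sample rate (Hz)
gap_ms = 6                               # silent gap duration (ms)
noise = rng.normal(size=int(0.5 * fs))   # 0.5 s of white noise
gap = slice(len(noise) // 2, len(noise) // 2 + int(fs * gap_ms / 1000))
noise[gap] = 0.0

# Crude envelope: rectify, then smooth with a 4-ms moving average
# (a stand-in for cochlear filtering and low-pass neural dynamics).
win = int(0.004 * fs)
env = np.convolve(np.abs(noise), np.ones(win) / win, mode="same")

# Offset response: half-wave-rectified drop in the envelope over a 4-ms lag.
lag = win
offset = np.maximum(env[:-lag] - env[lag:], 0.0)

# Detect the gap if the offset response crosses a threshold set from the
# pre-gap baseline; shorter gaps produce shallower envelope dips and
# fail this test, giving the model a gap-detection threshold.
baseline = offset[: gap.start - 2 * lag]
detected = (offset > baseline.mean() + 5 * baseline.std()).any()
print("6-ms gap detected:", bool(detected))
```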