Understanding the neuronal substrates of internal models of the world

Growing evidence suggests that the brain predicts its inputs and builds models of the world so as to best match its predictions to reality. While the idea of mental simulation dates back at least to Plato, and is echoed by modern theories of cortical function, little is known about how the brain builds internal models and predicts upcoming inputs. During behavior, movements are good predictors of sensory inputs: where we look determines what we see. We leverage olfactory closed-loop behaviors in mice, together with large-scale functional imaging and electrode recordings, to identify neuronal circuits that mediate egocentric predictions of future sensory inputs given specific motor actions. We aim to understand:
Where and how are sensorimotor predictions represented?
How are predictions compared to sensory inputs to trigger prediction errors?
How are internal models updated, given persistent errors?
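The loop implicit in these questions (predict the sensory consequence of an action, compare it to the actual input, and use the resulting error to update the internal model) can be sketched in a deliberately simple linear form. Everything below is an illustrative toy, not the lab's actual model: the world is assumed to map motor actions to sensory inputs through an unknown linear mapping, and the "internal model" is updated by a standard error-driven (delta-rule) step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy world: a motor action (e.g., an efference copy of a sniff-locked
# movement) linearly determines the next sensory input via an unknown W_true.
W_true = rng.normal(size=(4, 3))   # the world: action -> sensory input
W_model = np.zeros((4, 3))         # the internal model, learned online
eta = 0.1                          # learning rate

for step in range(2000):
    action = rng.normal(size=3)        # motor command (efference copy)
    sensory = W_true @ action          # actual sensory input
    predicted = W_model @ action       # egocentric prediction
    error = sensory - predicted        # prediction error
    # Persistent errors drive the model toward the true mapping:
    W_model += eta * np.outer(error, action)

# After learning, predictions match reality and errors vanish
print(np.abs(W_model - W_true).max())
```

Once the model has converged, prediction errors fall to zero for self-generated inputs, while any mismatch (for example, an unexpected odor) would again produce a large error signal.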



What features of odorants and their neuronal representations are relevant for olfactory perception?

The relationship between perceived odor quality, the underlying spatiotemporal patterns of activity in the brain, and the physicochemical space of odors remains elusive. The realization that color perception is based on three types of cone photoreceptors enabled the invention of cameras and displays that faithfully reproduce any natural stimulus by mixing a basis set of just three lights. In the case of smell, we lack any comparable conceptual understanding. To a large degree, we still do not understand what properties of odorants lead to particular percepts, and how these properties are represented in neuronal activity. We aim to identify the basis set of olfactory perception.
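The color analogy can be made concrete with a toy computation. In the sketch below (all numbers are illustrative: the cone sensitivity curves are simple Gaussians, not measured cone fundamentals), a 301-dimensional light spectrum is collapsed into just three cone responses. Any spectral difference lying in the null space of that projection is invisible to the observer, which is why two physically different lights (metamers) can look identical and why three primaries suffice for a display.

```python
import numpy as np

wavelengths = np.linspace(400, 700, 301)  # visible range, nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Toy S, M, L cone sensitivity curves (illustrative, not real fundamentals)
cones = np.stack([gaussian(445, 30), gaussian(545, 40), gaussian(565, 45)])

def cone_responses(spectrum):
    # Perception collapses a 301-dim physical stimulus into 3 numbers
    return cones @ spectrum

spectrum = gaussian(500, 50)
# Directions in the null space of `cones` are perceptually invisible:
null_basis = np.linalg.svd(cones)[2][3:]
metamer = spectrum + 0.5 * null_basis[0]   # physically different stimulus

# Same cone responses, hence the same percept
print(np.allclose(cone_responses(spectrum), cone_responses(metamer)))  # → True
```

For olfaction, the analogous basis set (if one exists) is unknown, and it is precisely this low-dimensional description of the perceptual space that we aim to identify.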
To determine what features in glomerular activity patterns are captured by the brain, we combine odor stimulation with patterned illumination methods and recordings from downstream circuits. Artificial optogenetic stimuli inform us about the detection thresholds and resolution limits of the system. Combining odors with precise manipulations, we test our predictions during odor-guided behaviors.

Reformatting and decoding distinct features of sensory information

How is information about odor intensity, identity, position, and valence, all entangled at the level of glomerular inputs, disambiguated to ultimately drive specific, context-dependent actions? Using patterned illumination and various recording techniques, we study integration rules between the bulb and its targets (anterior olfactory nucleus, piriform cortex, olfactory striatum, etc.), and observe how information is routed and used. We also capitalize on innovative DNA barcoding strategies (MAPseq and BARseq) for neuronal labeling to obtain connectivity statistics and determine the functional diversity of bulbar and cortical outputs.
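A toy example illustrates what "entangled at the input, disambiguated downstream" can mean in the simplest case. In the sketch below (an oversimplification for illustration only: real glomerular codes change nonlinearly with concentration, recruiting additional glomeruli at higher intensities), each odor is a direction in glomerular space and concentration scales the whole pattern. A single response vector then carries both variables at once, yet a simple readout separates them: pattern direction yields identity, pattern magnitude yields intensity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy glomerular code: each odor is a random direction in glomerular space,
# and concentration scales the whole pattern (a deliberate simplification).
n_glomeruli, n_odors = 50, 5
odor_patterns = rng.normal(size=(n_odors, n_glomeruli))

def glomerular_input(odor, concentration, noise=0.02):
    return concentration * odor_patterns[odor] + noise * rng.normal(size=n_glomeruli)

r = glomerular_input(odor=2, concentration=0.7)

# Identity: matched filter on the normalized pattern (direction only)
identity = int(np.argmax(odor_patterns @ (r / np.linalg.norm(r))))
# Intensity: overall magnitude, referenced to the identified template
intensity = np.linalg.norm(r) / np.linalg.norm(odor_patterns[identity])

print(identity, round(intensity, 2))
```

The interesting biological question is which of the many possible readouts of this kind (and which nonlinear generalizations) the bulb's downstream targets actually implement.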

Long-range feedforward-feedback interplay

Cross-talk between feedforward and feedback signals across different brain areas may enable key computations, ranging from extracting fine sensory features in complex environments to generating predictions about incoming stimuli, and may serve as a substrate for the planning and execution of motor actions. Despite overwhelming evidence for massive top-down projections, the specificity and logic of the interplay between feedforward and feedback signals linking early sensory processing areas and the cortex remain poorly understood. This is due both to a conceptual bias favoring feedforward hierarchical processing and to technical limitations in assessing the effects of cortical feedback. We investigate the function of such long-range circuits in the mammalian olfactory system.