Reinforced Holography

ESTABLISHING HOLOGRAPHIC MODULATION OF NEURONAL CELL POPULATIONS IN VIVO WITH REINFORCEMENT LEARNING PREDICTIONS


Relevant for Research Area

C - Applications


PIs 

Jun.-Prof. Dr. Joschka Bödecker

Prof. Dr. Ilka Diester


Summary

There is considerable heterogeneity in the activity patterns among neurons, especially in higher cortical areas. To understand the role of this functional diversity, it is necessary to dissect neural activity patterns during behavior at the level of individual neurons. However, experimentally addressing the behavioral contributions of functionally distinct but spatially intermingled neurons has been challenging due to the difficulty of perturbing the activity of individual neurons in a precise spatiotemporal manner in vivo. Here, we propose to overcome this challenge by using an inverse reinforcement learning approach to generate predictions based on the analysis of neural activity patterns recorded with 2-photon imaging. In this approach, we first extract a functional representation of the factors driving the observed animal behavior and then link it to the measured neural activity. We then use this representation to make precise predictions about expected behavioral changes based on simulated trials with perturbed neuronal activity. These predictions will enable us to guide single-neuron-targeted optogenetic manipulations using computer-generated 2-photon holography. This setup (Ultima 2Pplus + NeuraLight 3D, purchased via the large equipment proposal INST 39/1161-1FUGB) will be used to determine how manipulations of individual neurons affect the network. This will provide insights into the local micro-circuitry and the power of individual neurons to influence network activity and behavior.
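As a minimal illustration of this perturbation-based prediction step (a sketch under assumed interfaces, not the NeuRL implementation), the relevance of a single neuron can be estimated by comparing the action probabilities of the learned policy before and after a simulated change of that neuron's activity. The function and parameter names below (q_function, softmax_policy, neuron_relevance, delta) are placeholders introduced only for this example:

import numpy as np

def softmax_policy(q_values, beta=1.0):
    """Boltzmann policy over Q-values; beta is an assumed temperature."""
    z = beta * (q_values - q_values.max())
    p = np.exp(z)
    return p / p.sum()

def neuron_relevance(q_function, states, neuron_idx, delta=1.0):
    """Average shift in predicted action probabilities when neuron
    `neuron_idx` is perturbed by `delta` in each recorded state
    (i.e. in simulated trials)."""
    shifts = []
    for s in states:                   # states: neural activity vectors
        s_pert = s.copy()
        s_pert[neuron_idx] += delta    # simulated optogenetic drive (assumption)
        p_orig = softmax_policy(q_function(s))
        p_pert = softmax_policy(q_function(s_pert))
        shifts.append(np.abs(p_pert - p_orig).sum())
    return float(np.mean(shifts))

Neurons ranked highly by such a score would be the candidate targets for the holographic manipulations described below.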

More concretely, we will perform experiments with mice running in a virtual environment consisting of different virtual rooms. Via 2-photon calcium imaging, we will record the activity of hundreds of neurons during behavior and thereby be able to probe the relevance of individual neurons as well as of cell assemblies in our behavioral task. Different assemblies are likely to form and rearrange as the animals navigate through the different virtual rooms. To further probe the impact of specific individual neurons on the activity of target cell populations, we first need to identify the cells of particular importance for the behavior. Building on technology we recently developed, we will train a computational inverse reinforcement learning algorithm to predict the behavior of the subjects based on their neuronal activity (Kalweit et al., 2021). This technology is based on Inverse Q-learning (Kalweit et al., 2020) and will allow us to identify neurons that are particularly relevant for specific task aspects (e.g., correct choices, impatience measured as responses before the cue, or late responses). Based on these predictions, we will then generate a spatiotemporal stimulation sequence that will allow us to target the identified cells via our holographic optogenetic stimulation unit. We will develop novel and improved ways of fast data handling, extraction, and analysis to create a nearly closed-loop setup in which we read neural activity via 2-photon imaging, extract activity patterns and create predictions via Inverse Q-learning, and generate spatial masks for the holographic stimulation (sketched below). With this approach, we will be able to causally test the role of individual neurons and address the question of how many neurons need to be perturbed to reach a critical threshold. We may even observe effects on behavior caused by the photostimuli, i.e. behavioral biases that are predictable from the selectivity of the perturbed neuronal population, even though photostimulation and behavioral responses are temporally separated.
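The intended near-closed-loop cycle can be summarized by the following schematic sketch. All helper functions (acquire_frames, extract_traces, rank_neurons) are assumed placeholders, since the actual acquisition, source extraction, and stimulation control will depend on the software stack of the Ultima 2Pplus / NeuraLight 3D system:

import numpy as np

def closed_loop_step(acquire_frames, extract_traces, rank_neurons,
                     roi_centroids, n_targets=10):
    """One iteration: image -> extract -> predict -> build stimulation targets."""
    frames = acquire_frames()              # 2-photon calcium imaging frames
    traces = extract_traces(frames)        # neurons x time activity matrix
    relevance = rank_neurons(traces)       # e.g. inverse-Q-learning relevance scores
    targets = np.argsort(relevance)[::-1][:n_targets]
    # Spatial mask: coordinates of the selected cells, handed to the
    # holographic stimulation unit (phase-mask computation not shown).
    mask_points = roi_centroids[targets]
    return targets, mask_points

Each iteration turns the most recent imaging data into a ranked set of candidate neurons and their coordinates, which the holographic stimulation unit then translates into a phase mask for targeted photostimulation.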


Previous Related Publications

Kalweit G., Kalweit M., Alyahyay M., Jaeckel Z., Steenbergen F., Hardung S., Diester I. and Boedecker J. (2021) NeuRL: Closed-form Inverse Reinforcement Learning for Neural Decoding. ICML 2021 Workshop on Computational Biology.

Kalweit G., Huegle M., Werling M. and Boedecker J. (2020) Deep Inverse Q-learning with Constraints. Advances in Neural Information Processing Systems 33 (NeurIPS 2020).