HippoSLAM

LEARNING MULTISENSORY INTEGRATION FOR NEURAL CIRCUITS MODELING


Relevant for Research Area

C - Applications


PIs

Prof. Dr. Christian Leibold

Jun.-Prof. Abhinav Valada


Summary

Higher-level cognitive functions such as navigation and decision making rely on neural representations of abstract state spaces that combine sensory, contextual, emotional, and mnemonic information. Many of the neural circuits involved in such functions are reasonably well described, and for some of the relevant computations there even exist specific, physiologically grounded models and hypotheses. A major problem for testing these models is that they act on highly processed, sensory-derived information, which usually restricts evaluations to simple toy examples, since a full-scale simulation of real-world situations would require combining powerful (multi-)sensory processing algorithms with the neural circuit hypotheses. Here, we propose to carry out this endeavor for testing hippocampal circuit models in real-world navigational tasks. Combining the expertise of Dr. Valada on multisensory SLAM in robotics applications [1-6] and Dr. Leibold on multisensory spatial codes in the hippocampus [7-10], we will establish a robotic platform that allows testing hippocampal circuit hypotheses in realistic virtual and real environments. As a first step, we will use established sensory frontends [16,5] and combine them with established hippocampus-motivated circuit models [7] in a simulated environment, focusing particularly on the function of preplay sequences [11,12] in the hippocampus. Previous work suggested that these sequences could serve as a substrate for path integration in real and abstract cognitive spaces [7]; however, a demonstration that this hypothesis applies to real-world situations is still missing.
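
To illustrate the intended division of labor between a learned sensory frontend and a hippocampus-motivated sequence model, the following minimal Python sketch couples a placeholder embedding network to a fixed recurrent reservoir whose activity is linearly read out. All names (SensoryFrontend, SequenceReservoir) and all parameters are hypothetical stand-ins chosen for illustration; the sketch is neither the model of [7] nor the frontends of [16,5], only the kind of interface we plan to build between the two.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a learned sensory frontend (e.g. a VLocNet++-style
# embedding network): here just a fixed random projection of raw observations.
class SensoryFrontend:
    def __init__(self, obs_dim, embed_dim):
        self.W = rng.normal(0, 1.0 / np.sqrt(obs_dim), (embed_dim, obs_dim))

    def encode(self, obs):
        return np.tanh(self.W @ obs)

# Reservoir of intrinsic sequences, loosely in the spirit of [7]: a fixed
# recurrent network driven by the sensory embedding, whose self-generated
# activity sequences are then linearly decoded.
class SequenceReservoir:
    def __init__(self, n_units, embed_dim, leak=0.3):
        self.J = rng.normal(0, 1.0 / np.sqrt(n_units), (n_units, n_units))
        self.W_in = rng.normal(0, 0.5, (n_units, embed_dim))
        self.leak = leak
        self.x = np.zeros(n_units)

    def step(self, embedding):
        drive = self.J @ self.x + self.W_in @ embedding
        self.x = (1 - self.leak) * self.x + self.leak * np.tanh(drive)
        return self.x

# Toy loop: an agent moving on a circle; "observations" are noisy copies of
# the ground-truth position, standing in for real multisensory input.
T, obs_dim, embed_dim, n_units = 200, 20, 16, 100
frontend = SensoryFrontend(obs_dim, embed_dim)
reservoir = SequenceReservoir(n_units, embed_dim)

states = []
for t in range(T):
    pos = np.array([np.cos(0.05 * t), np.sin(0.05 * t)])  # true position
    obs = rng.normal(np.repeat(pos, obs_dim // 2), 0.1)   # noisy sensory input
    states.append(reservoir.step(frontend.encode(obs)))
X = np.stack(states)

# Linear position readout (ridge regression) to check whether the reservoir
# sequences carry decodable spatial information.
P = np.stack([[np.cos(0.05 * t), np.sin(0.05 * t)] for t in range(T)])
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_units), X.T @ P)
err = np.mean(np.linalg.norm(X @ W_out - P, axis=1))
print(f"mean position decoding error: {err:.3f}")
```

The essential point is the interface: the circuit model never sees raw sensor data, only the frontend's embedding, so either side can be replaced independently, by a trained deep network on one side or a more detailed physiological model on the other.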

Previous approaches using hippocampal place-coding ideas to steer robots [13] have already led to some interesting proof-of-principle results; in particular, they allowed testing the benefits of multimodal integration [14]. They were, however, constrained to a limited description of neural activity, and their intention was never to learn about brain function but rather to explore new possibilities for improving robot navigation. In the meantime, both circuit knowledge about the hippocampus and robot learning have evolved dramatically, requiring substantial new efforts to combine these fields. A recent study [15] applied a deep neural network-based vision system to an agent navigating in virtual reality using reinforcement learning and found that neural activity in the hidden layers exhibited some similarity to spatial activity patterns in the hippocampal formation. The neural architecture used in that study was, however, not well grounded in neural anatomy and physiology, and it therefore remained difficult to interpret the results from a neuroscience perspective. Our project instead uses machine learning methods to learn a sensory representation, on which more biologically motivated models (particularly ones implementing temporal activity features) then operate.
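
A recurring analysis step in this line of work, from [15] onward, is to ask whether a model unit's activity is spatially tuned, i.e., whether it resembles a place field. The sketch below is our own illustration, not code from [15]; the function name spatial_ratemaps and the synthetic test data are hypothetical. It bins unit activity by the agent's position to obtain ratemaps that can be compared against hippocampal tuning curves.

```python
import numpy as np

rng = np.random.default_rng(1)

def spatial_ratemaps(positions, activity, bins=10):
    """Bin unit activity by 2D position to obtain place-field-like ratemaps.

    positions: (T, 2) trajectory in [0, 1)^2
    activity:  (T, n_units) model-unit (e.g. hidden-layer) activations
    returns:   (n_units, bins, bins) mean activity per spatial bin
    """
    ij = np.clip((positions * bins).astype(int), 0, bins - 1)
    flat = ij[:, 0] * bins + ij[:, 1]          # flattened bin index per step
    n_units = activity.shape[1]
    sums = np.zeros((bins * bins, n_units))
    counts = np.zeros(bins * bins)
    np.add.at(sums, flat, activity)            # accumulate activity per bin
    np.add.at(counts, flat, 1)                 # occupancy per bin
    maps = sums / np.maximum(counts, 1)[:, None]
    return maps.T.reshape(n_units, bins, bins)

# Sanity check with synthetic Gaussian "place cells" on a random trajectory.
T, n_units = 5000, 4
pos = rng.random((T, 2))
centers = rng.random((n_units, 2))
act = np.exp(-np.sum((pos[:, None] - centers) ** 2, axis=2) / (2 * 0.05 ** 2))
maps = spatial_ratemaps(pos, act)
print(maps.shape, maps.max(axis=(1, 2)))  # each unit peaks near its center
```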


Previous Related Publications and General References

[1] Cattaneo D, Vaghi M, Valada A. LCDNet: Deep loop closure detection and point cloud registration for LiDAR SLAM. IEEE Transactions on Robotics. 2021 Jan 11.

[2] Younes A, Honerkamp D, Welschehold T, Valada A. Catch Me If You Hear Me: Audio-Visual Navigation in Complex Unmapped Environments with Moving Sounds. arXiv preprint arXiv:2111.14843. 2021 Nov 29.

[3] Mittal M, Mohan R, Burgard W, Valada A. Vision-based autonomous UAV navigation and landing for urban search and rescue. Proc. of the International Symposium on Robotics Research. 2019 Jun 4.

[4] Boniardi F, Valada A, Mohan R, Caselitz T, Burgard W. Robot localization in floor plans using a room layout edge extraction network. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2019 Nov 3 (pp. 5291-5297). IEEE.

[5] Radwan N, Valada A, Burgard W. VLocNet++: Deep multitask learning for semantic visual localization and odometry. IEEE Robotics and Automation Letters. 2018 Sep 10;3(4):4407-14.

[6] Valada A, Radwan N, Burgard W. Deep auxiliary learning for visual localization and odometry. In 2018 IEEE International Conference on Robotics and Automation (ICRA) 2018 May 21 (pp. 6939-6946). IEEE.

[7] Leibold C. A model for navigation in unknown environments based on a reservoir of hippocampal sequences. Neural Networks. 2020 Apr 1;124:328-42.

[8] Haas OV, Henke J, Leibold C, Thurley K. Modality-specific subpopulations of place fields coexist in the hippocampus. Cerebral Cortex. 2019 Mar 1;29(3):1109-20.

[9] Mankin EA, Thurley K, Chenani A, Haas OV, Debs L, Henke J, Galinato M, Leutgeb JK, Leutgeb S, Leibold C. The hippocampal code for space in Mongolian gerbils. Hippocampus. 2019 Sep;29(9):787-801.

[10] Fetterhoff D, Sobolev A, Leibold C. Graded remapping of hippocampal ensembles under sensory conflicts. Cell Reports. 2021 Sep 14;36(11):109661.

[11] Dragoi G, Tonegawa S. Preplay of future place cell sequences by hippocampal cellular assemblies. Nature. 2011 Jan;469(7330):397-401.

[12] Grosmark AD, Buzsáki G. Diversity in neural firing dynamics supports both rigid and learned hippocampal sequences. Science. 2016 Mar 25;351(6280):1440-3.

[13] Milford MJ, Wyeth GF, Prasser D. RatSLAM: a hippocampal model for simultaneous localization and mapping. In IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA '04. 2004 Apr 26 (Vol. 1, pp. 403-408). IEEE.

[14] Struckmeier O, Tiwari K, Pearson MJ, Kyrki V. ViTa-SLAM: Biologically-inspired visuo-tactile SLAM. arXiv preprint arXiv:1904.05667. 2019 Apr 11.

[15] Banino A, Barry C, Uria B, Blundell C, Lillicrap T, Mirowski P, Pritzel A, Chadwick MJ, Degris T, Modayil J, Wayne G. Vector-based navigation using grid-like representations in artificial agents. Nature. 2018 May;557(7705):429-33.

[16] Chancán M, Milford M. DeepSeqSLAM: A trainable CNN+RNN for joint global description and sequence-based place recognition. arXiv preprint arXiv:2011.08518. 2020 Nov 17.