NEUROTECHNOLOGICAL HUMAN-ROBOT INTERACTION
PIs
Prof. Tonio Ball
Jun.-Prof. Dr. Joschka Bödecker
Jun.-Prof. Dr. Abhinav Valada
Summary
In this research, we will investigate the principles of interaction between the brain and novel autonomous robotic systems. More specifically, we will develop robotic systems controlled by brain-machine interfaces to perform service tasks [1]. This proposal forms the core of the IMBIT MBI major equipment scientific program.
An important future target group of these systems is severely paralyzed patients who have no other means of giving feedback on the activities performed by the robots. Therefore, novel methods will be developed to integrate decoded brain signals both into the learning of new tasks and into the adaptation of existing robotic abilities. Beyond the robotic challenges of this endeavor, one major obstacle to everyday use is the preparation of the EEG measurement employed in our previous NeuroBots setup. Gel-free electrodes save considerable preparation time, but in our experience their signal quality and wearing comfort are poor. In contrast, optically pumped magnetometers (OPMs) for measuring the MEG offer a highly interesting alternative: they are contactless, immediately ready for use, and have better physical properties than non-invasive EEG and than MEG based on Superconducting Quantum Interference Devices (SQUIDs).
Previously, this project was funded by BrainLinks-BrainTools under the name ServiceBots. In the first year, we focused on ordering the equipment specified in the MBI major equipment scientific program, and we developed a system [2] that allows a robot to learn new skills from its own experience while receiving interactive feedback from a user. Complex skills were learned in a real-world setting with only one hour of training, using easy-to-provide evaluative and corrective feedback. For more details, visit the project website at http://ceiling.cs.uni-freiburg.de. Given the low dimensionality of the required feedback, we aim in the next step to decode the user's intentions and preferences from their brain signals and to incorporate them into the learning process as an alternative form of feedback. As an additional perspective, this project will investigate to what extent the user can be offered a smooth transition between high-level control, as in the previously presented drinking assistant [2], and low-level control (“sliding autonomy”). This would enable the user, if desired, to control the robot down to the level of the individual motors, e.g., to demonstrate new solutions via the developed interactive feedback approach.
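The sliding-autonomy idea can be illustrated with a minimal sketch. The function name and the linear blending scheme below are our own illustrative assumptions, not the system's actual implementation: a scalar autonomy level interpolates between the learned policy's action and the user's low-level per-motor command.

```python
import numpy as np

def blend_commands(policy_action, user_command, autonomy):
    """Blend a learned policy's action with a user's low-level command.

    autonomy = 1.0 -> fully autonomous (policy action only),
    autonomy = 0.0 -> fully manual (user controls individual motors).
    Both inputs are vectors of per-motor targets (e.g., joint velocities).
    """
    a = float(np.clip(autonomy, 0.0, 1.0))
    return (a * np.asarray(policy_action, dtype=float)
            + (1.0 - a) * np.asarray(user_command, dtype=float))
```

At autonomy 1.0 the robot executes the policy unchanged; at 0.0 the user's command passes through directly; intermediate values realize shared control along the high-level-to-low-level spectrum described above.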
We will further investigate how a robot can perform mobile manipulation skills, e.g., picking up and putting down objects while navigating. Such a capability is a critical requirement, as it can significantly increase efficiency and minimize waiting times for users. Because learning requires a large amount of interaction data, we will employ several identical robots that collect and aggregate data in parallel, together with the motion capture system, which will provide the poses of the robots and of the objects they interact with. Processing the large amounts of data in reinforcement learning and controlling the robots in real time also require extensive computing, for which the IMBIT resources will be used.
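Pooling experience from several identical robots can be sketched as a shared replay buffer that parallel collectors fill concurrently. The class and function names below are illustrative assumptions; the actual data pipeline of the system may differ.

```python
import random
import threading
from collections import deque

class SharedReplayBuffer:
    """Thread-safe experience buffer that several robots fill in parallel."""

    def __init__(self, capacity):
        self._buffer = deque(maxlen=capacity)
        self._lock = threading.Lock()

    def add(self, transition):
        with self._lock:
            self._buffer.append(transition)

    def sample(self, batch_size):
        with self._lock:
            return random.sample(list(self._buffer), batch_size)

    def __len__(self):
        with self._lock:
            return len(self._buffer)

def collect(robot_id, buffer, n_steps):
    """Stand-in for one robot's rollout loop: each step stores a
    (robot_id, state, action, reward, next_state) transition."""
    for t in range(n_steps):
        buffer.add((robot_id, t, None, 0.0, t + 1))  # dummy transition

# Four simulated robots contribute 100 transitions each in parallel.
buffer = SharedReplayBuffer(capacity=10_000)
threads = [threading.Thread(target=collect, args=(i, buffer, 100)) for i in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

The learner can then sample mini-batches from the 400 pooled transitions; with real robots, the same pattern applies with rollouts streaming in over the network instead of threads.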
This work hence integrates all components of the Machine-Brain Interface (MBI) major equipment, including the mobile manipulators, the motion capture and EEG systems, and the computing resources.