ServiceBots

Brain-Controlled Service Robots


Relevant for Research Area

C - Applications

The project builds on

NeuroBots

PIs

Prof. Wolfram Burgard

Jun.-Prof. Joschka Boedecker

Jun.-Prof. Abhinav Valada


Summary

In this research, the principles of interaction between the brain and novel autonomous robotic systems will be investigated. More specifically, robotic systems controlled by brain-machine interfaces will be developed to perform service tasks for paralyzed users. In this context, the focus will be on the following research problems:

Learning New Robotic Skills from Multimodal Brain Signal Feedback

An important target group of these systems are severely paralyzed patients who have no other means of giving feedback on the activities performed by the robots. Novel methods will therefore be developed to integrate the decoded brain signals both into the learning of new tasks and into the adaptation of existing robotic abilities. In addition to the information decoded from brain activity, peripheral data such as heart rate or skin resistance will be measured and integrated.

First, we will use our techniques for learning user preferences to further improve the selection of possible actions. The goal is to rank actions and parameters according to the learned preferences for the temporal sequence of actions and the use of different objects; we expect this to further reduce the number of steps needed to select objects (see the sketch below). A simulated home environment will allow us to demonstrate the brain-controlled robot in typical application scenarios. For example, we will be able to investigate activities that involve more complex tasks, such as opening a refrigerator or a room door and removing objects from the refrigerator. In this context, we are particularly interested in the question of which actions are necessary to grasp a certain object in a given situation and how these can be determined effectively using machine learning techniques. As a further perspective, this project will investigate the extent to which it is possible to provide the user of the system with a smooth transition between high-level and low-level control. This would enable the user to control the robot completely at the level of the individual motors if desired, e.g. to demonstrate new solutions. This work integrates all components belonging to the Machine-Brain Interface (MBI) equipment, including the mobile manipulators, the EEG system, and computing resources.
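The following is a minimal, self-contained sketch of the preference-ranking idea described above: a linear utility model scores candidate actions and is updated online from a scalar feedback signal, which here stands in for decoded brain activity (e.g., an error-related potential) fused with peripheral measures. The feature layout, learning rate, and feedback scale are illustrative assumptions, not the project's implementation.

```python
import numpy as np

class ActionPreferenceModel:
    """Linear utility over action features, adapted online from user feedback."""

    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def rank(self, candidates: np.ndarray) -> np.ndarray:
        """Order candidate actions from most to least preferred."""
        return np.argsort(-(candidates @ self.w))

    def update(self, chosen_features: np.ndarray, feedback: float) -> None:
        """Shift the utility of the chosen action.

        feedback in [-1, 1]: negative stands in for decoded disagreement
        (e.g., an error-related potential), positive for agreement;
        peripheral signals such as heart rate could be fused into it.
        """
        self.w += self.lr * feedback * chosen_features

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = ActionPreferenceModel(n_features=4)
    # Each row: features of one candidate action (object, grasp type, ...).
    candidates = rng.normal(size=(5, 4))
    best = model.rank(candidates)[0]
    model.update(candidates[best], feedback=-1.0)  # brain signal flagged an error
    print(model.rank(candidates))                  # re-rank after the update
```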

Learning for Human-Robot Interaction

Previously, AIS developed, within the BrainLinks-BrainTools cluster of excellence, a brain-controlled drinking assistant in which a robot can fetch, pour, and serve a drink to the user. The robot uses force sensors to sense the pressure when the user comes into contact with the drink and can regulate it appropriately. This type of fine-grained interaction requires constant contact with the human but must not involve excessive force, which is a major challenge for conventional control techniques (see the controller sketch below). It also requires accurate estimation of the position of the human and their body parts. At the same time, such applications potentially offer considerable added value, especially for severely paralyzed patients in need of assistance. In this project, we will extend this work to other control applications, e.g. to realize a scratch assistant, a capability that, according to caregivers, is an important assistance function especially for severely paralyzed patients. The existing work will be expanded toward reliable visual perception of humans and their poses in 3D. To this end, the techniques for segmenting body parts will be extended to three-dimensional surfaces, and an optimal control strategy will be determined with the help of reinforcement learning. The user's respective intentions will be inferred, and feedback from the user's brain signals will be incorporated into the learning process. This project integrates a variety of components, including mobile manipulation platforms, decoding of brain data, and robust human perception.
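As an illustration of the force regulation problem described above, here is a minimal 1-D admittance-control sketch: the end effector tracks a small desired contact force from the force sensor and retracts immediately if a safety bound is exceeded. All gains, limits, and the position/force interface are illustrative assumptions, not the project's actual control stack.

```python
DESIRED_FORCE = 2.0    # N, light sustained contact (assumed)
FORCE_LIMIT = 6.0      # N, hard safety bound (assumed)
KP = 0.002             # m/s per N, admittance gain (assumed)
V_MAX = 0.02           # m/s, velocity clamp (assumed)
RETRACT_STEP = 0.005   # m, immediate back-off on overload (assumed)
DT = 0.01              # s, control period

def admittance_step(measured_force: float, position: float) -> float:
    """One control tick: move along the contact normal to track the force."""
    if measured_force > FORCE_LIMIT:
        return position - RETRACT_STEP              # safety: retract at once
    error = DESIRED_FORCE - measured_force          # >0: press slightly more
    velocity = max(-V_MAX, min(V_MAX, KP * error))  # map force error to motion
    return position + velocity * DT

# Example tick: contact is slightly too light, so the commanded position
# advances by a fraction of a millimetre toward the desired force.
print(admittance_step(measured_force=1.5, position=0.10))
```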

Joint Learning of Navigation and Manipulation Tasks

In addition, we will investigate several fundamental problems in learning-based robotics to enable efficient and adaptive behavior of service robots. This includes learning navigation and manipulation tasks simultaneously. Thus far, these two tasks have almost exclusively been learned separately, under the assumption that the robot does not move relative to the object while performing the grasping movement. Here, we will investigate how a robot can pick up and put down objects while navigating. Such a capability is a critical requirement, as it can significantly increase efficiency and minimize the waiting time for users. Since a large amount of interaction data is required for learning, we will exploit an approach in which several identical robots collect and aggregate data in parallel, together with a motion capture system that provides the poses of the robots and the objects they interact with (see the sketch below). This will enable robots to learn important navigation and manipulation tasks in everyday environments within a reasonable time. Processing the large amounts of data in reinforcement learning and controlling the robots in real time also require extensive computing resources, for which the IMBIT resources will be used.
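A minimal sketch of the parallel collection scheme, under the assumption that worker threads stand in for the identical robots and a random-number stub stands in for the real environment and the motion-capture pose stream: each worker pushes transitions into one shared replay buffer that a learner then samples from.

```python
import random
import threading
from collections import deque, namedtuple

Transition = namedtuple("Transition", "obs action reward next_obs done")

class ReplayBuffer:
    """Thread-safe experience store shared by all collecting robots."""

    def __init__(self, capacity: int = 100_000):
        self._data = deque(maxlen=capacity)
        self._lock = threading.Lock()

    def add(self, transition: Transition) -> None:
        with self._lock:
            self._data.append(transition)

    def sample(self, batch_size: int):
        with self._lock:
            return random.sample(list(self._data), min(batch_size, len(self._data)))

def collect(robot_id: int, buffer: ReplayBuffer, steps: int) -> None:
    """Stand-in for one robot's rollout loop; real observations would
    include robot and object poses from the motion capture system."""
    rng = random.Random(robot_id)
    obs = rng.random()
    for _ in range(steps):
        action = rng.random()                      # placeholder policy
        next_obs, reward = rng.random(), rng.random()
        buffer.add(Transition(obs, action, reward, next_obs, False))
        obs = next_obs

if __name__ == "__main__":
    buffer = ReplayBuffer()
    workers = [threading.Thread(target=collect, args=(i, buffer, 1000))
               for i in range(4)]                  # four identical robots
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    batch = buffer.sample(32)                      # a learner update would go here
    print(len(batch))
```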


Research Status

In this project, the principles of interaction between the brain and novel autonomous robotic systems will be investigated. More specifically, robotic systems controlled by brain-machine interfaces will be developed to perform service tasks for paralyzed users. An important target group of these systems are severely paralyzed patients who have no other means of giving feedback on the activities performed by the robots; therefore, novel methods will be developed to integrate the decoded brain signals into the learning of new tasks as well as into the adaptation of existing robotic abilities.

In a first step toward this goal, we developed a system that allows a robot to learn new skills from its own experience while receiving interactive feedback from a user. Complex skills have been learned in real-world settings, requiring only one hour of training with easy-to-provide evaluative and corrective feedback (sketched below). For more details, visit the project website at http://ceiling.cs.uni-freiburg.de
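The following sketch illustrates the interactive feedback scheme in simplified form, not the actual CEILing implementation: evaluative feedback (good/bad) weights a behavior-cloning loss, while corrective feedback overwrites the executed action before it is stored as a training label. Network sizes and the feedback encoding are assumptions.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def interactive_step(obs, evaluative, corrective=None):
    """One step with human feedback.

    evaluative: 1.0 (good) or 0.0 (bad) -- weights this sample's loss.
    corrective: optional action override supplied by the user.
    Returns the (obs, action, weight) tuple to store for training.
    """
    action = policy(obs).detach()
    if corrective is not None:
        action = corrective          # the user's correction becomes the label
    return obs, action, float(evaluative)

def train_on(obs, actions, weights):
    """Weighted behavior cloning on the feedback-labeled data."""
    loss = (weights * ((policy(obs) - actions) ** 2).mean(dim=-1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    sample = interactive_step(torch.randn(8), evaluative=1.0,
                              corrective=torch.tensor([0.1, -0.2]))
    print(train_on(torch.randn(16, 8), torch.randn(16, 2), torch.ones(16)))
```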

Given the low dimensionality of the required feedback, we will in the future aim to decode the user's intentions and preferences from their brain signals and incorporate them into the learning process as an alternative form of feedback. As an additional perspective, this project will investigate the extent to which it is possible to provide the user of the system with a smooth transition between high-level control, as in the previously presented drinking assistant, and low-level control (see the blending sketch below). This would enable the user to control the robot completely at the level of the individual motors if desired, e.g. to demonstrate new solutions via the developed interactive feedback approach.
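One simple way to realize such a transition, shown here purely as an assumed sketch rather than the project's design, is a single authority parameter that blends the autonomous high-level command with the user's direct motor commands.

```python
import numpy as np

def blended_command(high_level: np.ndarray,
                    low_level: np.ndarray,
                    alpha: float) -> np.ndarray:
    """alpha = 1.0: fully autonomous; alpha = 0.0: direct motor control."""
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return alpha * high_level + (1.0 - alpha) * low_level

# Example: the user lowers alpha to take over and demonstrate a new motion.
q_dot_auto = np.array([0.1, 0.0, -0.2])  # planner's joint velocities (assumed)
q_dot_user = np.array([0.0, 0.3, 0.0])   # user-issued joint velocities (assumed)
print(blended_command(q_dot_auto, q_dot_user, alpha=0.25))
```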

We are further investigating how a robot can perform mobile manipulation skills, e.g. picking up and putting down objects while navigating. Such a capability is a critical requirement, as it can significantly increase efficiency and minimize the waiting time for users. Since a large amount of interaction data is required for learning, an approach with several identical robots that collect and aggregate data in parallel will be exploited, along with a motion capture system that provides the poses of the robots and the objects they interact with. Processing the large amounts of data in reinforcement learning and controlling the robots in real time also requires extensive computing, for which the IMBIT resources will be used. This work hence integrates all components belonging to the Machine-Brain Interface (MBI) large equipment, including the mobile manipulators, the EEG system, and computing resources.

Publications

Eugenio Chisari, Tim Welschehold, Joschka Boedecker, Wolfram Burgard, Abhinav Valada. Correct Me if I am Wrong: Interactive Learning for Robotic Manipulation. arXiv preprint arXiv:2110.03316, 2021. [pdf]