LEARNING COGNITION-INSPIRED TASK MODELS FOR MOBILE ROBOTIC SKILLS
This project is part of the ELLIS PhD program.
Prof. Wolfram Burgard
Jun.-Prof. Abhinav Valada
Humans are able to process their environments at different levels of abstraction, depending on the goals they have in mind. They can reason about their world in broad, big-picture steps, or in fine-grained detail when performing individual actions. Using these different levels, they can plan out the cleaning of an entire house as well as perform the individual actions needed to complete each chore.
In this project we explore the problem of developing representations that allow robotic agents to exhibit abstract and concrete reasoning capabilities similar to those we observe in humans. So far, neither classic symbolic representations nor learned representations have been able to reproduce the broad range of human reasoning and manipulation capabilities. Initially, we investigate representations that encode the human-salient aspects of smaller tasks, so that robotic agents can learn short-horizon skills for such tasks from human demonstrations. Once an agent has acquired a set of these low-level skills, we study mechanisms that allow it to reason over its capabilities in pursuit of larger goals. These goals involve the agent moving through its environment, requiring it to maintain a mental model of the subtasks it can no longer observe. We aim to develop reasoning capabilities that mimic human levels of abstraction, as we view these as essential for enabling robotic agents to successfully learn complex behaviors from humans and for enabling universal human-robot collaboration.
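To make the two-level structure concrete, the following sketch illustrates one possible way an agent could sequence learned low-level skills toward a larger goal while keeping a simple "mental model" of state it can no longer observe. All names, the precondition/effect skill encoding, and the greedy forward search are hypothetical illustrations, not the project's actual method:

```python
from dataclasses import dataclass, field

# Hypothetical two-level task model: a high-level planner sequences
# low-level skills (assumed here to be already learned from demonstrations),
# while the agent's belief set serves as its remembered, possibly
# unobservable, world state.

@dataclass
class Skill:
    name: str
    precondition: frozenset  # facts that must hold before the skill runs
    effect: frozenset        # facts the skill makes true

@dataclass
class Agent:
    skills: list
    belief: set = field(default_factory=set)  # mental model of the world

    def plan(self, goal: set, max_steps: int = 10):
        """Greedy forward search over skills until all goal facts hold."""
        state, plan = set(self.belief), []
        for _ in range(max_steps):
            if goal <= state:
                return plan
            for skill in self.skills:
                if skill.precondition <= state and not skill.effect <= state:
                    state |= skill.effect
                    plan.append(skill.name)
                    break
            else:
                return None  # no applicable skill advances the state
        return plan if goal <= state else None

# Example: two short-horizon skills composed into a chore.
skills = [
    Skill("grasp_sponge", frozenset(), frozenset({"holding_sponge"})),
    Skill("wipe_table", frozenset({"holding_sponge"}), frozenset({"table_clean"})),
]
agent = Agent(skills=skills)
print(agent.plan({"table_clean"}))  # → ['grasp_sponge', 'wipe_table']
```

In a real system, the symbolic facts above would be replaced by learned representations, and the greedy search by a more capable planner; the sketch only shows how abstract goals can decompose into a sequence of concrete skills.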
Ultimately, we see this project as a step toward making robotic aides in everyday human environments feasible: it provides the cognitive framework needed to understand what humans want to achieve, and develops the reasoning methods to imagine the steps to get there.