Robots4SocialGood

FROM LEARNING TO RELEARNING ALGORITHMIC FAIRNESS FOR DETERRING BIASED OUTCOMES IN SOCIALLY-AWARE ROBOT NAVIGATION


Relevant for Research Areas

B - Core Technologies

C - Applications


Summary

Humans have the ability to learn and relearn: we adapt to physical changes in the environment, and we also adjust our behavior to mitigate prejudiced or inequitable conduct. Relearning is a natural process that is essential for humans to function appropriately in the real world and coexist in society. In this project, we study mechanisms to prevent unfair outcomes in socially-aware robot navigation, so that robots are equipped with strategies to adapt their trajectories to both physical and social requirements. We are investigating the key elements required to connect the technological and social fields in order to mitigate socially biased outcomes while learning socially-aware navigation strategies. With this interdisciplinary perspective, we are exploring the social implications of incorporating fairness measurements into the learning techniques, considering ethical elements that attend to underrepresented groups of people, in support of initiatives for inclusion and fairness in AI and robotics. By incorporating social context into the learning processes of robot navigation, we aim to make it feasible to detect potentially harmful or unintended outcomes at an early stage, as a preemptive measure against dangerous situations that could arise after deployment. This would give rise to robots that positively influence society by fostering more equitable social relationships, roles, and dynamics.
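
To make the idea of "including fairness measurements in the learning techniques" concrete, the following is a minimal, illustrative sketch and not the project's actual method: it assumes a learned navigation policy trained with a shaped reward, hypothetical pedestrian group labels (group_a, group_b), a distance-based comfort score, and a demographic-parity-style disparity penalty weighted by a hypothetical coefficient lambda_fair.

```python
import numpy as np

# Hypothetical group labels; any grouping of pedestrians could be used here.
GROUPS = ("group_a", "group_b")


def comfort_score(robot_pos, person_pos, preferred_dist=1.2):
    """Social comfort in [0, 1]: 1 when the robot keeps at least the
    preferred distance, decreasing linearly as it gets closer."""
    dist = np.linalg.norm(np.asarray(robot_pos) - np.asarray(person_pos))
    return float(np.clip(dist / preferred_dist, 0.0, 1.0))


def fairness_penalty(comfort_by_group):
    """Disparity between the best- and least-served groups, measured as the
    gap in average comfort (a demographic-parity-style penalty)."""
    means = [np.mean(v) for v in comfort_by_group.values() if len(v) > 0]
    if len(means) < 2:
        return 0.0
    return max(means) - min(means)


def shaped_reward(task_reward, robot_pos, people, lambda_fair=0.5):
    """Task reward minus a weighted fairness penalty over observed people.

    `people` is a list of (position, group) tuples observed in the scene;
    `lambda_fair` trades off task performance against group disparity.
    """
    comfort_by_group = {g: [] for g in GROUPS}
    for pos, group in people:
        comfort_by_group[group].append(comfort_score(robot_pos, pos))
    return task_reward - lambda_fair * fairness_penalty(comfort_by_group)


if __name__ == "__main__":
    # Toy scene: one person from each group at different distances.
    people = [((1.0, 0.5), "group_a"), ((0.4, -0.2), "group_b")]
    print(shaped_reward(task_reward=1.0, robot_pos=(0.0, 0.0), people=people))
```

Because the penalty is computed at every step of training, a disparity in how the policy treats different groups shows up in the reward signal immediately, which is one way such biased outcomes could be detected early rather than after deployment.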