Responsible AI

Responsibility as a key concept for anchoring AI innovation to human rights, ethics, and human flourishing.

AI and related digital technologies have become a disruptive force in our societies, and calls for ethical frameworks and regulation have grown louder. The research group Responsible AI holds that responsibility is a key concept for anchoring AI innovation to human rights, ethics, and human flourishing. This interdisciplinary working group has been active at the University of Freiburg since 2018. Its members are Prof. Dr. Wolfram Burgard (Speaker of BrainLinks-BrainTools and coordinator of the ELLIS unit Freiburg), Dr. Philipp Kellmeyer and Prof. Dr. Oliver Müller (both members of BrainLinks-BrainTools), and Prof. Dr. Silja Vöneky.

For more information about ongoing projects, please visit the website responsible-ai.org.

In June 2020 the team is organizing a virtual conference, Global Perspectives on Responsible AI, to discuss some of the most pressing technological, philosophical, ethical, and legal challenges posed by AI and AI systems over the next decade from a global and transdisciplinary perspective. To this end, the organizers invite researchers, scholars, experts from various fields, and lawmakers to take part in the conference and exchange thoughts and ideas about fundamental and specific key elements of responsible AI. The exchange among participants from different regions (Africa, Asia, Australia, the USA, and Europe) and different disciplines (AI, computer science, medicine, neuroscience, philosophy, and law) aims to provide an opportunity to find common ground and new answers to pressing questions of AI governance and regulation.