INTERPRETABLE AND TRUSTWORTHY AI SYSTEMS IN MEDICAL RESEARCH AND HEALTHCARE
Relevant for Research Area
C - Applications
The project is funded by the Baden-Württemberg Foundation.
Prof. Dr. Tonio Ball
Jun.-Prof. Dr. Joschka Boedecker
Prof. Dr. Wolfram Burgard
Prof. Dr. Oliver Müller
Prof. Dr. Silja Vöneky
The convergence of several digital technologies (big data, smart sensors, artificial neural networks for deep learning, high-performance computing, and other advances) enables a new generation of 'AI systems'.
In medical research and healthcare, AI systems promise substantial advances in diagnosing, predicting, and treating diseases. For AI-based systems to succeed and offer a real advantage in healthcare, however, they need to be interpretable and trustworthy in the sense that they comply with ethical and legal rules and with society's high standards for technological risk assessment. What is still missing is a systematic and thorough comparison of existing methods for interpreting deep learning networks that assesses their usefulness in the context of a clinical AI system.
To address this important research gap, AI-TRUST will develop a deep-learning-based assistive system for EEG diagnosis (DeepEEG) built on interpretable deep-learning tools. In an interdisciplinary team, we will also address the main challenge of ensuring an ethically embedded and value-sensitive design of AI systems in medicine. The aim is to find solutions that consider the values, rights, and concerns of key stakeholders (e.g., patients and doctors) in terms of the desired functions, the level of safety and security, the non-violation of rights and values (such as patient autonomy), and more. Together, these efforts will substantially promote the development of transparent and ethical AI systems in medicine.
More information is available at the official project website: responsible-ai.org/ai-trust/