Independent research group: Human-Centric Explainable AI

The research group "Human-Centric Explainable AI," led by Dr. David Johnson, focuses on co-construction and is closely connected to the TRR. Its goal is to better understand how people process explanations provided by AI-supported decision-making systems in sensitive situations, such as assessing mental health. The group examines which types of explanations and interactions are most effective in preventing excessive trust in inaccurate or unreliable AI recommendations.
The group aims to design XAI tools and interactions that actively involve users in the development process, and it conducts extensive testing to learn how people engage with these explanations. Based on these insights, the group will develop improved AI-supported decision-making systems for mental health assessment and other real-world challenges.
To achieve this goal, the group takes an interdisciplinary approach. It combines methods from computer science, psychology, and human-computer interaction to address all key aspects of the research.
Research areas
Computer science, psychology
Staff
Seham Nassr, Bielefeld University