Projects
A total of 22 project leads, supported by some 40 researchers at Bielefeld University and Paderborn University and drawn from fields ranging from linguistics, psychology, and media studies to sociology, economics, philosophy, and computer science, are investigating the co-construction of explanations.
The areas of research included in TRR 318 are divided into three categories: (A) “Explaining process”, (B) “Explanation as social practice”, and (C) “Representing and computing explanations”. These areas are in turn subdivided into interdisciplinary subprojects.
Project INF provides an overarching research structure; Project WIKO facilitates public relations and outreach; Project Z addresses administrative, organizational, and financial matters; and the RTG Project provides a framework for educating doctoral and post-doctoral researchers.
Overview
Area A - Explaining process
| Abbreviation | Name |
| --- | --- |
| A01 | Adaptive explanation generation |
| A02 | Monitoring the understanding of explanations |
| A03 | Co-constructing explanations between AI-explainer and human explainee under arousal or nonarousal |
| A04 | Co-constructing duality-enhanced explanations |
| A05 | Contextualized and online parametrization of scaffolding in human–robot explanatory dialog |
| A06 | Explaining the multimodal display of stress in clinical explanations |
Area B - Explanation as social practice
| Abbreviation | Name |
| --- | --- |
| B01 | A dialog-based approach to explaining machine learning models |
| B03 | Exploring users, roles, and explanations in real-world contexts |
| B05 | Co-constructing explainability with an interactively learning robot |
| B06 | Ethics and normativity of explainable AI |
| B07 | Communicative practices of requesting information and explanation from LLM-based agents |
Area C - Representing and computing explanations
| Abbreviation | Name |
| --- | --- |
| C01 | Explanations for healthy distrust in large language models |
| C02 | Interactive learning of explainable, situation-adapted decision models |
| C03 | Interpretable machine learning: Explaining change |
| C04 | Metaphors as an explanation tool |
| C05 | Creating explanations in collaborative human-machine knowledge exploration |
| C06 | Technically enabled explanation of speaker traits |
| C07 | Co-construction-following large language models for explaining |
Intersecting projects
| Abbreviation | Role |
| --- | --- |
| INF | Provides an overarching research structure |
| WIKO | Facilitates public relations and outreach |
| Z | Addresses administrative, organizational, and financial matters |
| RTG | Provides a framework for educating doctoral and post-doctoral researchers |
Associated projects
| Title | Description |
| --- | --- |
| Development of symmetrical mental models | Independent research group at Paderborn University |
| Human-Centric Explainable AI | Independent research group at Bielefeld University |