Area A: Explaining
  A01  Adaptive explanation generation
  A02  Monitoring the understanding of explanations
  A03  Co-constructing explanations with emotional alignment between AI explainer and human explainee
  A04  Integrating the technical model into the partner model in explanations of digital artifacts
  A05  Contextualized and online parametrization of attention in human–robot explanatory dialog
  A06  Co-constructing social signs of understanding to adapt monitoring to diversity

Area B: Social practice
  B01  A dialog-based approach to explaining machine learning models
  B03  Exploring users, roles, and explanations in real-world contexts
  B05  Co-constructing explainability with an interactively learning robot
  B06  Ethics and normativity of explainable artificial intelligence

Area C: Representing and computing explanations
  C01  Healthy distrust in explanations
  C02  Interactive learning of explainable, situation-adapted decision models
  C03  Interpretable machine learning: Explaining change
  C04  Metaphors as an explanation tool
  C05  Creating explanations in collaborative human–machine knowledge exploration
  C06  Technically enabled explaining of speaker traits

Intersecting projects
  …  Toward a framework for assessing explanation quality
  Ö  Questions about explainable technology
  Z  Central administrative project
  …  Integrated Research Training Group