3rd TRR 318 Conference: Contextualizing Explanations (ContEx25)
As AI systems are increasingly deployed in high-stakes domains, it becomes ever more important to make them transparent, both to ensure meaningful human control and to empower users to contest or override AI-based decisions. Without sufficient transparency, increasingly complex and autonomous AI systems risk leaving users feeling overwhelmed and out of control, which is legally and ethically unacceptable, especially for high-stakes decisions. For users to feel empowered rather than out of control, explanations need to be relevant, providing sufficient information on the basis of which an output can be contested or challenged.
The XAI community has increasingly noted that no single explanation can fit all needs. Moreover, recent work advocates a more participatory approach to XAI in which users are not merely involved but can directly shape and guide the explanations given by an AI system.
The 3rd TRR 318 Conference: Contextualizing Explanations is an international and interdisciplinary conference focusing on the question of how explanations can be contextualized to increase their relevance and empower users.
Key research questions that we want to explore during the conference include:
1. How do contextual variables influence the effectiveness of explanations?
2. What are the relevant context factors to be taken into account in adapting an explanation to specific domains, users, or situations?
3. How can context be represented algorithmically to support contextual adaptation of XAI explanations?
4. What new architectures or approaches in XAI support the dynamic adaptation of explanations with respect to changing user needs?
5. How can user modelling support a more personalized explanation process?
6. In which ways can the dynamics of context be modelled?
7. What are actual examples or use cases of explanation processes and how does context influence the explanation process?
8. How can the suitability of contextually adapted explanations be studied, validated, and evaluated?
9. Which explanation processes are particularly suitable for which context?
10. Which context-specific outcomes are influenced by explanations?
11. How can XAI empower users across diverse contexts to make informed decisions and effectively interact with AI systems?
12. What constitutes a useful taxonomy for categorizing contexts in which explanations are provided?
13. What are the various contexts in which explanations are provided and utilized?
Important Dates
Registration Period: April 7th - June 1st, 2025
Deadline for Submissions: April 16th, 2025 (Extended Deadline)
Notification of Acceptance: May 16th, 2025
Conference: June 17th and 18th, 2025, Bielefeld
Invited Speakers
Angelo Cangelosi (University of Manchester)
Virginia Dignum (Umeå University)
Kacper Sokol (ETH Zurich)
The Importance of Starting Small with Baby Robots

Abstract:
Cognitive developmental robotics aims to develop robots capable of human-like learning, interaction, and behavior by grounding concrete and abstract concepts in sensorimotor experiences and social interactions. This talk introduces examples of language grounding in cognitive developmental robotics and explores how principles like “starting small”, “embodied intelligence”, and “super-embodiment” can address the limitations of AI tools such as large language models (LLMs), which rely heavily on large datasets and lack sensorimotor grounding. By integrating incremental, multimodal learning and redefining embodiment to encompass physical, mental, and social processes, we can enable robots to better understand and utilize abstract concepts. The talk will also reflect on the pros and cons of using foundation models in cognitive robotics and consider research issues in explainable AI (XAI) and trust.
About the Speaker:
Angelo Cangelosi is Professor of Machine Learning and Robotics at the University of Manchester (UK) and co-director and founder of the Manchester Centre for Robotics and AI. He was awarded a European Research Council (ERC) Advanced Grant (funded by UKRI). His research interests are in cognitive and developmental robotics, neural networks, language grounding, human-robot interaction and trust, and robot companions for health and social care. Overall, he has secured over £40m in research grants as coordinator/PI, including the ERC Advanced Grant eTALK, the UKRI TAS Trust Node and CRADLE Prosperity, the US AFRL project THRIVE++, and numerous Horizon and MSCA grants. Cangelosi has produced more than 400 scientific publications. He is Editor-in-Chief of the journals Interaction Studies and IET Cognitive Computation and Systems, and in 2015 was Editor-in-Chief of IEEE Transactions on Autonomous Mental Development. He has chaired numerous international conferences, including ICANN 2022 in Bristol and ICDL 2021 in Beijing. His book “Developmental Robotics: From Babies to Robots” (MIT Press) was published in January 2015 and has been translated into Chinese and Japanese. His latest book, “Cognitive Robotics” (MIT Press), co-edited with Minoru Asada, was published in 2022 (Chinese translation in 2025).
Aligning Responsibility with Regulation: Bridging Technical Design and European Policy

Abstract:
The European Union’s approach to AI regulation focuses on transparency, accountability, and human oversight. Explainability is central to building responsible AI and influences both technical development and policy. This talk explores how explainability supports transparency, accountability, and human-centric values, all of which are key concerns in current EU debates on AI governance. Highlighting challenges and opportunities, I will outline how explainable AI can serve as a bridge between system design and societal expectations, ensuring that technological advancement is matched by ethical and legal responsibility.
About the Speaker:
Virginia Dignum is Professor of Responsible AI at Umeå University, Sweden, where she leads the AI Policy Lab. A Wallenberg Scholar and senior AI policy advisor, she chairs the ACM Technology Policy Council and is a Fellow of EURAI, ELLIS, and the Royal Swedish Academy of Engineering Sciences (IVA). She co-chairs the IEEE Global Initiative on AI Ethics and is an expert for UNESCO, OECD, and the Global Partnership on AI. She has advised the UN, EU, and WEF on AI governance and is a founder of ALLAI. Her upcoming book, The AI Paradox, is set for release in 2025.
Beyond XAI: Explainable Data-driven Modelling for Human Reasoning and Decision Support

Abstract:
Insights from the social sciences have transformed explainable artificial intelligence from a largely technical discipline into a more human-centred one, enabling diverse stakeholders, rather than technical experts alone, to benefit from its developments. The focus of explainability research itself has nonetheless remained largely unchanged: to help people understand the operation and output of predictive models. This, however, may not be the most consequential function of such systems; they can instead be adapted to complement, augment, and enhance the abilities of humans rather than (fully) automating their various roles in an explainable way. In this talk I will explore how we can reimagine XAI by drawing upon a broad range of relevant interdisciplinary findings. The resulting, more comprehensive conceptualisation of the entire research field promises to be better aligned with humans by supporting their reasoning and decision-making in a data-driven way. As the talk will show, medical applications, as well as other high-stakes domains, stand to benefit greatly from such a shift in perspective.
About the Speaker:
Kacper is a researcher in the Medical Data Science group at ETH Zurich. His main research focus is transparency – interpretability and explainability – of data-driven predictive systems based on artificial intelligence and machine learning algorithms intended for medical applications. Previously, he was a Research Fellow at the ARC Centre of Excellence for Automated Decision-Making and Society, affiliated with RMIT University in Melbourne, Australia. Prior to that, he held numerous research positions at the University of Bristol, United Kingdom, working on a range of AI and ML projects. Kacper holds a Master's degree in Mathematics and Computer Science and a doctorate in Computer Science from the University of Bristol.
Program
Tuesday, 17th of June

9:00 - 10:30 | Invited Talk: Kacper Sokol (Chair: Philipp Cimiano)
10:30 - 11:00 | Coffee / Tea
11:00 - 13:00 | Two parallel sessions:

Session 1 at Long Table: “Explanations in Context: Foundations and Concepts” (Chair: Ingrid Scharlau)
- Context-Aware Explainability in AI-Powered Language Education: The CURIPOD (Dilsah Kalay)
- The Principal’s Principles: Actionable (Personalized) AI Alignment as Underexplored XAI (Kevin Baum, Richard Bergs, Holger Hermanns, Sophie Kerstan, Markus Langer, Anne Lauber-Rönsberg, Philip Meinel, Laura Stenzel, Sarah Sterz and Hanwei Zhang)
- Development of a Human Knowledge Integrated Workflow for Context-aware Machine Learning (Bahavathy Kathirgamanathan, Gennady Andrienko and Natalia Andrienko)
- Framing what and how to think: Lay people’s metaphors for algorithms (Philip Porwol, Miriam Körber, Friederike Kern, Carsten Schulte and Ingrid Scharlau)
- BFO-based Ontology of Context for Social XAI (Meisam Booshehri, Hendrik Buschmeier and Philipp Cimiano)

Session 2 at Plenary Hall: “User Studies in Explainability” (Chair: Hendrik Buschmeier)
- Co-Constructive Behavior of Large Language Models in Explanation Dialogues (Leandra Fichtel, Maximilian Spliethöver, Eyke Hüllermeier, Patricia Jimenez, Nils Klowait, Stefan Kopp, Axel-Cyrille Ngonga Ngomo, Amelie Robrecht, Ingrid Scharlau, Lutz Terfloth, Anna-Lisa Vollmer and Henning Wachsmuth)
- Cognitive and Interactive Adaptivity to the Explainee in an Explanatory Dialogue: An Experimental Study (Heike M. Buhl, Josephine B. Fisher and Katharina J. Rohlfing)
- Contextualizing Counterfactuals: Gender Differences in Alignment with Biased (X)AI (Ulrike Kuhl and Annika Bush)
- Explaining to and Being Explained by a Service Robot: Four HRI Studies Revisited Under a Framework for Explainability (Anna Belardinelli, Chao Wang, Daniel Tanneberg, Matti Krüger, Stephan Hasler and Michael Gienger)
- Contextualizing Explanations in Fluid Collaboration (Florian Schröder, Fabian Heinrich and Stefan Kopp)

13:00 - 14:00 | Lunch
14:00 - 15:30 | Invited Talk: Virginia Dignum (Chair: Benjamin Paaßen)
15:30 - 16:00 | Coffee / Tea
16:00 - 18:00 | Two parallel sessions:

Session 3 at Long Table: “Explaining ML Models” (Chair: Petra Wagner)
- Stability of Model Explanations in Interpretable Prototype-based Classification Learning (Subhashree Panda, Marika Kaden and Thomas Villmann)
- Inherently Explainable Hierarchical Generalized Learning Vector Quantization Model (Adia Khalid and Benjamin Paaßen)
- Conceptual Metaphors on LLM-based NLI through Shapley Interactions (Meghdut Sengupta, Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke Hüllermeier, Debanjan Ghosh and Henning Wachsmuth)
- Towards Symbolic XAI — Explanation through Human Understandable Logical Relationships Between Features (Thomas Schnake, Farnoush Rezaei Jafari, Jonas Lederer, Ping Xiong, Shinichi Nakajima, Stefan Gugler, Grégoire Montavon and Klaus-Robert Müller)

Session 4 at Plenary Hall: “Explanations in Context: Foundations and Concepts” (Chair: Friederike Kern)
- The Context as Resource: Contextual Adaptations and Explanations in the Field (Fabian Beer and Elena Esposito)
- Emerging Categories in Scientific Explanations (Giacomo Magnifico and Eduard Barbu)
- Context-dependent Effects of Explanations on Multiple Layers of Trust (Eda Ismail-Tsaous, Matthias Uhl, Celine Spannagl, Sebastian Krügel and Ute Schmid)
- A Framework for Context-aware XAI in Human-AI Collaboration (Judith Knoblach, Bettina Finzel and Ute Schmid)

Wednesday, 18th of June

9:00 - 10:30 | Invited Talk: Angelo Cangelosi (Chair: Anna-Lisa Vollmer)
10:30 - 11:00 | Coffee / Tea
11:00 - 13:00 | Two parallel sessions:

Session 5 at Long Table: “Explaining ML Models” (Chair: Henning Wachsmuth)
- Assessing Intersectional Bias in Representations of Pre-Trained Image Recognition Models (Valerie Krug and Sebastian Stober)
- Contextualizing Explainability of Learning-Path Recommendations through Knowledge Graphs and Graph-based MDP (Hasan Abu-Rasheed and Christian Weber)
- Using SHAP for Feature Importance in Predicting Axelrod Tournament Winners (Oleksii Ignatenko and Dimitriy Yevchenko)
- Explainable Text Clustering in the Context of Psychological Research (Luis Klocke and Benjamin Paaßen)

Session 6 at Plenary Hall: “User Studies in Explainability” (Chair: Stefan Kopp)
- Human-Centered and Contextual Assessment of Human-AI Decision-Making Interventions (Seham Nasr and David Johnson)
- The Influence of Individual Traits and Stress on Expressions Related to Understanding and Confusion (Leonard Krause, Jonas Paletschek, David Johnson and Hanna …)
- Towards Co-Constructed Explanations: Multi-Agent Reasoning-Based Conversational System for Adaptive Explanations (Dimitry Mindlin, Meisam Booshehri and Philipp Cimiano)
- Socially-Aware Robot Explanations: Inferring Needs from Human Facial Expressions (Dimosthenis Kontogiorgos and Julie Shah)

13:00 - 14:00 | Lunch
14:00 - 16:00 | Panel: Benjamin Paaßen, Katharina Rohlfing, Astrid Schomäcker, Sönke Sievers
Access

The conference takes place in the Center for Interdisciplinary Research (ZiF) at Bielefeld University.
Address: Methoden 1
33615 Bielefeld
Germany
By train:
From Bielefeld Hbf (main station), take tram line 4 (destination Universität or Lohmannshof, approx. 7 minutes). From the stop Universität or Bültmannshof, you can reach ZiF by walking up the hill behind the main university building. During the day, a bus also runs from Bielefeld main station to ZiF (line 61 towards Werther/Halle or line 62 towards Borgholzhausen); exit at the stop Universität/Studentenwohnheim.
Taxis are available directly in front of the main station; the ride to ZiF takes approx. 10 minutes and the fare to the university is currently around 16 euros.
By car:
From the north:
Motorway A2, exit Bielefeld-Ost: follow Detmolder Str. towards Zentrum (6 km, approx. 10 min), then continue via Kreuzstr., Oberntorwall, Stapenhorststr., and Wertherstr. until ZiF is signposted.
From the south:
Motorway A2: at the Bielefeld interchange, take the A33 towards Bielefeld-Zentrum and exit at Bi-Zentrum. Follow the signs towards the city centre on the Ostwestfalendamm (B61), exit at Universität, then follow Stapenhorststr. and Wertherstr. until ZiF is signposted.
By plane:
Nearest airports: Paderborn/Lippstadt and Hanover
Düsseldorf
(approx. 190 km from Bielefeld)
From Düsseldorf Airport, the Skytrain takes you to the airport train station in about 5 minutes. Depending on which train you take (direct or with a change in Hamm or Duisburg), the journey to Bielefeld takes between 1.5 and 2 hours.
Hanover
(approx. 110 km from Bielefeld)
The S-Bahn line S5 takes you from Hanover Airport to Hanover main station in 12 minutes. Intercity trains from Hanover to Bielefeld run every hour; journey time approx. 50-60 minutes (without changing trains).
Paderborn/Lippstadt
Paderborn/Lippstadt Airport lies centrally in East Westphalia on the K37, between the cities of Paderborn (20 km) and Lippstadt (26 km), and can be reached quickly and easily from all directions.
Dortmund
(approx. 110 km from Bielefeld)
You can either take a free shuttle bus from Dortmund Airport to Holzwickede and from there a regional train to Bielefeld, changing in Hamm (journey time approx. 1 hour), or take a shuttle bus to Dortmund main station (journey time: 25 minutes) and from there a direct train to Bielefeld (journey time: less than an hour).
Frankfurt am Main
(approx. 320 km from Bielefeld)
Intercity connections run every hour from Frankfurt Airport to Bielefeld, with a change in Cologne or Hanover; journey time approx. 4 hours.
Cologne-Bonn
(approx. 200 km from Bielefeld)
Intercity trains run every hour from Cologne-Bonn Airport to Bielefeld; journey time approx. 2.5 hours.
Organizing Committee
Program Committee
Name | Institution | Role |
Philipp Cimiano | Bielefeld University | Chair |
Benjamin Paaßen | Bielefeld University | Chair |
Anna-Lisa Vollmer | Bielefeld University | Chair |
Jose M. Alonso-Moral | CiTIUS, Universidade de Santiago de Compostela | Ordinary Member |
Zach Anthis | University College London | Ordinary Member |
Kevin Baum | German Research Center for Artificial Intelligence | Ordinary Member |
Rafael Berlanga | Universitat Jaume I | Ordinary Member |
Heike Buhl | Paderborn University | Ordinary Member |
Alejandro Catala | CiTIUS, Universidade de Santiago de Compostela | Ordinary Member |
Alina Deriyeva | Bielefeld University | Ordinary Member |
Elena Esposito | Bielefeld University | Ordinary Member |
Peter Flach | University of Bristol | Ordinary Member |
Barbara Hammer | Bielefeld University | Ordinary Member |
Eyke Hüllermeier | LMU Munich | Ordinary Member |
Friederike Kern | Bielefeld University | Ordinary Member |
Adia Khalid | Bielefeld University | Ordinary Member |
Stefan Kopp | Bielefeld University | Ordinary Member |
Aida Kostikova | Bielefeld University | Ordinary Member |
Marco Matarese | Istituto Italiano di Tecnologia | Ordinary Member |
Tim Miller | The University of Queensland | Ordinary Member |
Sebastian Müller | Universität Bonn | Ordinary Member |
Katharina Rohlfing | Paderborn University | Ordinary Member |
Ingrid Scharlau | Paderborn University | Ordinary Member |
Eva Schmidt | TU Dortmund | Ordinary Member |
Kacper Sokol | ETH Zürich | Ordinary Member |
Timo Speith | Universität Bayreuth | Ordinary Member |
Philipp Vaeth | Technical University of Applied Sciences Wuerzburg-Schweinfurt | Ordinary Member |
Henning Wachsmuth | Leibniz University Hannover | Ordinary Member |
General questions go to conference@trr318.uni-paderborn.de,
media enquiries to communication@trr318.uni-paderborn.de.