3rd TRR 318 Conference: Contextualizing Explanations (ContEx25)

As AI systems are increasingly deployed in high-stakes domains, making them transparent becomes ever more important to ensure meaningful human control and to empower users to contest or override AI-based decisions. Without sufficient transparency, increasingly complex and autonomous AI systems may leave users feeling overwhelmed and out of control, which is legally and ethically unacceptable, especially for high-stakes decisions. For users to feel empowered rather than out of control, explanations need to be relevant, providing sufficient information on the basis of which an output can be contested or challenged.

The XAI community has increasingly noted that no single explanation can fit all needs. Moreover, recent work has advocated a more participatory approach to XAI in which users are not only involved but can directly shape and guide the explanations given by an AI system.

The 3rd TRR 318 Conference: Contextualizing Explanations is an international, interdisciplinary conference focusing on the question of how explanations can be contextualized to increase their relevance and empower users.

Key research questions that we want to explore during the conference include:

1. How do contextual variables influence the effectiveness of explanations?

2. What are the relevant context factors to be taken into account in adapting an explanation to specific domains, users, or situations?

3. How can context be represented algorithmically to support contextual adaptation of XAI explanations?

4. What new architectures or approaches in XAI support the dynamic adaptation of explanations with respect to changing user needs?

5. How can user modelling support a more personalized explanation process?

6. In which ways can the dynamics of context be modelled?

7. What are actual examples or use cases of explanation processes and how does context influence the explanation process?

8. How can the suitability of contextually adapted explanations be studied, validated, and evaluated?

9. Which explanation processes are particularly suitable for which context?

10. Which context-specific outcomes are influenced by explanations?

11. How can XAI empower users across diverse contexts to make informed decisions and effectively interact with AI systems?

12. What constitutes a useful taxonomy for categorizing contexts in which explanations are provided?

13. What are the various contexts in which explanations are provided and utilized?

Important Dates

Registration Period: April 7th - June 1st

Deadline for Submissions: April 16th (Extended Deadline) 

Notification of Acceptance: May 7th

Conference: June 17th and 18th, Bielefeld

Invited Speakers

Angelo Cangelosi (University of Manchester)

Virginia Dignum (Umeå University)

Kacper Sokol (ETH Zurich)

The Importance of Starting Small with Baby Robots

Abstract:
Cognitive developmental robotics aims to develop robots capable of human-like learning, interaction, and behavior by grounding concrete and abstract concepts in sensorimotor experiences and social interactions. This talk introduces examples of language grounding in cognitive developmental robotics and explores how principles like “starting small”, “embodied intelligence” and “super-embodiment” can address the limitations of AI tools, such as large language models (LLMs), which rely heavily on large datasets and lack sensorimotor grounding. By integrating incremental, multimodal learning and redefining embodiment to encompass physical, mental, and social processes, we can enable robots to better understand and utilize abstract concepts. The talk will also reflect on the pros and cons of using foundation models in cognitive robotics and consider research issues on explainable AI (XAI) and trust.

About the Speaker:
Angelo Cangelosi is Professor of Machine Learning and Robotics at the University of Manchester (UK) and co-director and founder of the Manchester Centre for Robotics and AI. He was awarded a European Research Council (ERC) Advanced Grant (funded by UKRI). His research interests are in cognitive and developmental robotics, neural networks, language grounding, human-robot interaction and trust, and robot companions for health and social care. Overall, he has secured over £40m of research grants as coordinator/PI, including the ERC Advanced grant eTALK, the UKRI TAS Trust Node and CRADLE Prosperity, the US AFRL project THRIVE++, and numerous Horizon and MSCA grants. Cangelosi has produced more than 400 scientific publications. He is Editor-in-Chief of the journals Interaction Studies and IET Cognitive Computation and Systems, and in 2015 was Editor-in-Chief of IEEE Transactions on Autonomous Development. He has chaired numerous international conferences, including ICANN2022 Bristol and ICDL2021 Beijing. His book “Developmental Robotics: From Babies to Robots” (MIT Press) was published in January 2015 and has been translated into Chinese and Japanese. His latest book, “Cognitive Robotics” (MIT Press), co-edited with Minoru Asada, was published in 2022 (Chinese translation in 2025).

Aligning Responsibility with Regulation: Bridging Technical Design and European Policy

Abstract: 
The European Union’s approach to AI regulation focuses on transparency, accountability, and human oversight. Explainability is central to building responsible AI and influences both technical development and policy. This talk explores how explainability supports transparency, accountability, and human-centric values, all of which are key concerns in current EU debates on AI governance. Highlighting challenges and opportunities, I will outline how explainable AI can serve as a bridge between system design and societal expectations, ensuring that technological advancement is matched by ethical and legal responsibility.

About the Speaker:
Virginia Dignum is Professor of Responsible AI at Umeå University, Sweden, where she leads the AI Policy Lab. A Wallenberg Scholar and senior AI policy advisor, she chairs the ACM Technology Policy Council and is a Fellow of EURAI, ELLIS, and the Royal Swedish Academy of Engineering Sciences (IVA). She co-chairs the IEEE Global Initiative on AI Ethics and is an expert for UNESCO, OECD, and the Global Partnership on AI. She has advised the UN, EU, and WEF on AI governance and is a founder of ALLAI. Her upcoming book, The AI Paradox, is set for release in 2025.

Beyond XAI: Explainable Data-driven Modelling for Human Reasoning and Decision Support

Abstract: 
Insights from social sciences have transformed explainable artificial intelligence from a largely technical into a more human-centred discipline, thus enabling diverse stakeholders, rather than technical experts alone, to benefit from its developments. The focus of explainability research itself has nonetheless remained largely unchanged: helping people understand the operation and output of predictive models. This, however, may not necessarily be the most consequential function of such systems; they can be adapted to complement, augment and enhance the abilities of humans instead of (fully) automating their various roles in an explainable way. In this talk I will explore how we can reimagine XAI by drawing upon a broad range of relevant interdisciplinary findings. The resulting, more comprehensive conceptualisation of the entire research field promises to be better aligned with humans by supporting their reasoning and decision-making in a data-driven way. As the talk will show, medical applications, as well as other high-stakes domains, stand to greatly benefit from such a shift in perspective.

About the Speaker:
Kacper is a researcher in the Medical Data Science group at ETH Zurich. His main research focus is transparency – interpretability and explainability – of data-driven predictive systems based on artificial intelligence and machine learning algorithms intended for medical applications. Previously, he was a Research Fellow at the ARC Centre of Excellence for Automated Decision-Making and Society, affiliated with RMIT University in Melbourne, Australia. Prior to that he held numerous research positions at the University of Bristol, United Kingdom, working on a range of AI and ML projects. Kacper holds a Master's degree in Mathematics and Computer Science and a doctorate in Computer Science from the University of Bristol.

Registration

Register for the Conference by June 1st.

Participation is free of charge.

Program

Time | Tuesday, June 17 | Wednesday, June 18
9:00 - 10:30 | Kacper Sokol (Invited Talk) | Angelo Cangelosi (Invited Talk)
10:30 - 11:00 | Coffee / Tea | Coffee / Tea
11:00 - 13:00 | Contributed Talks (6) | Contributed Talks (6)
13:00 - 14:00 | Lunch | Lunch
14:00 - 15:30 | Virginia Dignum (Invited Talk) | Panel
15:30 - 16:00 | Coffee / Tea |
16:00 - 18:00 | Contributed Talks (6) |
19:00 - 22:00 | Dinner with Invited Speakers, Panelists and TRR PIs |

Access

The conference takes place in the Center for Interdisciplinary Research (ZiF) at Bielefeld University.

Address: Methoden 1, 33615 Bielefeld, Germany

By train: 
From Bielefeld Hbf (main station), take tram line 4 (destination Universität or Lohmannshof; approx. 7 minutes). From the stops Universität or Bültmannshof you can reach the ZiF by walking up the hill behind the university's main building. During the day, buses also run from the main station to the ZiF (line 61 towards Werther/Halle or line 62 towards Borgholzhausen); exit at Universität/Studentenwohnheim.

Taxis are always available directly in front of the main station (it takes approx. 10 minutes from the main station to ZiF). The fare to the university is currently around 16 euros.

By car: 
From the north:
Motorway A2: exit Bi-Ost, then Detmolder Str. towards Zentrum (6 km, approx. 10 min). Continue via Kreuzstr., Oberntorwall, Stapenhorststr. and Wertherstr. until the ZiF is signposted.

From the south:
Motorway A2: at the Bielefeld junction, take the A33 towards Bi-Zentrum and exit at Bi-Zentrum. Follow the signs towards the city centre on the Ostwestfalendamm (B61), exit at Universität, then follow Stapenhorststr. and Wertherstr. until the ZiF is signposted.

By plane:
Nearest airports:
Paderborn/Lippstadt and Hanover

Düsseldorf
(approx. 190 km from Bielefeld)

From Düsseldorf Airport, the Skytrain takes you to the airport train station in about 5 minutes.
Depending on the train (direct or with a change in Hamm or Duisburg), the journey to Bielefeld takes between 1½ and 2 hours.

Hanover
(about 110 km to Bielefeld)

The S-Bahn line S5 takes you from Hanover Airport to Hanover main station in 12 minutes.
Intercity trains from Hanover to Bielefeld run every hour; the journey takes approx. 50-60 minutes without changing trains.


Paderborn/Lippstadt
Paderborn/Lippstadt Airport lies in the middle of East Westphalia between the cities of Paderborn (20 km) and Lippstadt (26 km) and can be reached quickly and easily from all directions.
 

Dortmund
(about 110 km to Bielefeld)

You can either take a free shuttle bus from Dortmund Airport to Holzwickede and from there a regional train to Bielefeld, changing in Hamm (journey time about 1 hour), or take a shuttle bus to Dortmund main station (journey time: 25 minutes) and from there a direct train to Bielefeld (journey time: less than an hour).
 

Frankfurt am Main
(approx. 320 km from Bielefeld)

There are hourly Intercity connections from Frankfurt Airport to Bielefeld, with a change in Cologne or Hanover (journey time approx. 4 hours).
 

Cologne-Bonn
(about 200 km to Bielefeld)

Intercity trains run every hour from Cologne-Bonn Airport to Bielefeld; the journey takes approx. 2½ hours.

Organizing Committee

Program Committee

Name | Institution | Role
Philipp Cimiano | Bielefeld University | Chair
Benjamin Paaßen | Bielefeld University | Chair
Anna-Lisa Vollmer | Bielefeld University | Chair
Jose M. Alonso-Moral | CiTIUS, Universidade de Santiago de Compostela | Ordinary Member
Zach Anthis | University College London | Ordinary Member
Kevin Baum | German Research Center for Artificial Intelligence | Ordinary Member
Rafael Berlanga | Universitat Jaume I | Ordinary Member
Heike Buhl | Paderborn University | Ordinary Member
Alejandro Catala | CiTIUS, University of Santiago de Compostela | Ordinary Member
Alina Deriyeva | Bielefeld University | Ordinary Member
Elena Esposito | Bielefeld University | Ordinary Member
Peter Flach | University of Bristol | Ordinary Member
Barbara Hammer | Bielefeld University | Ordinary Member
Eyke Hüllermeier | LMU Munich | Ordinary Member
Friederike Kern | Bielefeld University | Ordinary Member
Adia Khalid | Bielefeld University | Ordinary Member
Stefan Kopp | Bielefeld University | Ordinary Member
Aida Kostikova | Bielefeld University | Ordinary Member
Marco Matarese | Istituto Italiano di Tecnologia | Ordinary Member
Tim Miller | The University of Queensland | Ordinary Member
Sebastian Müller | Universität Bonn | Ordinary Member
Katharina Rohlfing | Paderborn University | Ordinary Member
Ingrid Scharlau | Paderborn University | Ordinary Member
Eva Schmidt | TU Dortmund | Ordinary Member
Kacper Sokol | ETH Zürich | Ordinary Member
Timo Speith | Universität Bayreuth | Ordinary Member
Philipp Vaeth | Technical University of Applied Sciences Wuerzburg-Schweinfurt | Ordinary Member
Henning Wachsmuth | Leibniz University Hannover | Ordinary Member

 

General questions go to conference@trr318.uni-paderborn.de; media enquiries to communication@trr318.uni-paderborn.de.
