After four years of intensive research, the Transregional Collaborative Research Centre 318 ‘Constructing Explainability’ is taking stock at the end of its first funding phase, financed by the German Research Foundation (DFG). In this interview, the two spokespersons, Professor Dr Katharina Rohlfing and Professor Dr Philipp Cimiano, share key insights.
Computer scientist Meisam Booshehri, a doctoral researcher in the INF project of TRR 318, received the Best Demo Award at the 21st International Conference on Semantic Systems in Vienna.
How can a robot best support us linguistically in solving tasks? Researchers in subproject A05 investigated this question. They developed a model that allows the Nao robot to select appropriate verbal explanation strategies depending on how the human has behaved previously and what cognitive state they are likely to be in.
TRR 318 invites you to the GENIALE Science Festival at the Wissenswerkstadt Bielefeld. On November 22, visitors can gain insights into the Transregio's research through workshops, games, and experiments.
When people talk about abstract topics, they often use linguistic imagery. Such metaphors make complex content easier to understand and provide clues as to how a topic is perceived. In their engaging guidebook “Metaphors for Explaining”, Prof. Dr. Ingrid Scharlau and Philip Porwol describe how metaphors shape our communication.
In their latest study, Tobias Peters and Professor Dr. Ingrid Scharlau of Project C01 examined how people deal with incorrect recommendations from AI systems and whether deliberately fostering distrust can improve users’ performance when using these systems. The key finding: calling for skepticism does not improve performance; instead, it tends to worsen it.
Can artificial intelligence work with doctors as equals and help them to make better diagnoses? A research team from the C05 project at TRR 318 is looking into this question. The computer scientists are developing an interactive system that assists doctors in making a diagnosis and, in dialogue with them, reviews and weighs up assumptions.
TRR 318 will be represented at the 2025 World Conference on Explainable Artificial Intelligence (XAI) in Istanbul from July 9 to 11. The conference brings together international experts every year to discuss the latest developments in XAI. This year, researchers from projects A03 and C02 will present their current work.
On 17 and 18 June, around 80 international researchers from a range of disciplines came together for the TRR 318 conference “Contextualizing Explanations” in Bielefeld. The invited speakers and visitors discussed the importance of context in explanatory situations.
How do children learn their first words and how can care robots acquire the skills to help patients? In the fifth episode of the podcast “Explaining Explainability”, experts shed light on the concept of scaffolding - the art of supporting learners so that they can master complex tasks independently - and apply it to explanatory situations.
The current issue of the TRR 318 newsletter explores the role of context in AI explanations, aligning with the theme of the third TRR 318 conference “Contextualizing Explanations”, scheduled for June 17 and 18 at Bielefeld University.
Researchers from the INF project have developed a model that predicts optimal prompts, with the aim of improving the output generated by language models. Maximilian Spliethöver presented the TRR 318 study at the NAACL 2025 conference in New Mexico, USA.
Hubert Baniecki is a PhD student at the University of Warsaw and was a visiting researcher in project C03 of TRR 318 at LMU Munich in March. In the interview, he shares what he researched with the TRR members, what made the collaboration special for him, and the impressions he took away.
With ChatGPT and DeepSeek, have we already found an explainable AI that meets our requirements, an AI that can explain itself? Moderator Prof. Dr.-Ing. Britta Wrede discusses this question with her guests Prof. Dr. Henning Wachsmuth and Prof. Dr. Axel Ngonga Ngomo in the fourth episode of the podcast “Explaining Explainability”.