Explaining Explainability – the podcast on Explainable Artificial Intelligence

How does the act of explanation, done by humans, work – and how can it be applied to AI systems? These questions are being investigated by the Collaborative Research Center “Constructing Explainability” at the universities of Bielefeld and Paderborn.
In this podcast we bring the different disciplines together: two researchers from different fields discuss a concept related to Explainable Artificial Intelligence (XAI) from their respective points of view. The discussion is moderated by Prof. Britta Wrede, professor of Medical Assistance Systems at Bielefeld University and an expert on Explainable Artificial Intelligence.
Episodes

In this first episode, we approach the topic of explainability from the perspectives of computer science and linguistics.
The guests are the spokespersons of the Collaborative Research Center: Prof. Katharina Rohlfing, psycholinguist at Paderborn University, and Prof. Philipp Cimiano, computer scientist at Bielefeld University. They share insights into the motivation that drives them, the research goals they are pursuing, and the societal impact of XAI.

What is the difference between explainability and explaining? How does the act of explanation, done by humans, work – and how can it be applied to AI systems? And why is this so important for XAI? Prof. Britta Wrede discusses these and other questions with Prof. Katharina Rohlfing, psycholinguist at Paderborn University, and Prof. Philipp Cimiano, computer scientist at Bielefeld University.

How do you know when someone has understood something? And how can explainers adapt their approach to promote better understanding? In this episode, Prof. Britta Wrede discusses these questions with Prof. Hendrik Buschmeier, a computational linguist at Bielefeld University, and Prof. Heike Buhl, a psychologist at Paderborn University. (Episode in German; English transcript available.)

As TRR 318 understands it, explanations are not delivered in a one-way process – they are co-constructed by the explainer and the explainee. Large Language Models (LLMs) seem to be quite good at just that – but do they really co-construct? And what is co-construction in the first place? Prof. Britta Wrede discusses these questions with two experts on LLMs, Prof. Axel Ngonga Ngomo from Paderborn University and Prof. Henning Wachsmuth from Leibniz Universität Hannover.
Stay updated
Don't miss out when new episodes are released:
Subscribe to our newsletter or follow us on LinkedIn or Instagram for updates!