Explaining Explainability – the podcast on Explainable Artificial Intelligence

How does the act of explanation work between humans – and how can it be applied to AI systems? These questions are investigated by the Collaborative Research Center “Constructing Explainability” (TRR 318) at the universities of Bielefeld and Paderborn.

In this podcast we bring the disciplines together: two researchers from different fields discuss a concept from Explainable Artificial Intelligence (XAI) from their respective points of view. The discussion is moderated by Prof. Britta Wrede, Professor of Medical Assistance Systems at Bielefeld University and an expert on Explainable Artificial Intelligence.

Episodes

In this first episode, we approach the topic of explainability from the perspectives of computer science and linguistics.

The guests are the speakers of the Collaborative Research Center: Prof. Katharina Rohlfing, psycholinguist at Paderborn University, and Prof. Philipp Cimiano, computer scientist at Bielefeld University. They share insights into the motivation that drives them, the research goals they are pursuing, and the societal impact of XAI.

Listen in at: Spotify | Web | YouTube

What is the difference between explainability and explaining? How does the act of explanation work between humans – and how can it be applied to AI systems? And why is this so important for XAI? Prof. Britta Wrede discusses these and other questions, again with Prof. Katharina Rohlfing, psycholinguist at Paderborn University, and Prof. Philipp Cimiano, computer scientist at Bielefeld University.

Listen in at: Spotify | Web | YouTube

How do you know when someone has understood something? And how can explainers adapt their approach to promote better understanding? In this episode, Prof. Britta Wrede discusses these questions with Prof. Hendrik Buschmeier, a computational linguist at Bielefeld University, and Prof. Heike Buhl, a psychologist at Paderborn University. (Episode in German; English transcript available.)

Listen in at: Spotify | Web | YouTube

As TRR 318 understands it, explanations are not delivered in a one-way process: they are co-constructed by the explainer and the explainee. Large Language Models (LLMs) seem to be quite good at doing just that – but do they really co-construct? And what is co-construction in the first place? Prof. Britta Wrede discusses these questions with two experts on LLMs, Prof. Axel Ngonga Ngomo from Paderborn University and Prof. Henning Wachsmuth from Leibniz Universität Hannover.

Listen in at: Spotify | Web | YouTube

At TRR 318, scaffolding is recognized as a crucial aspect of explaining. It involves supporting learners, whether they are children acquiring their first words or robots interacting with caregivers. In the fifth episode of the podcast “Explaining Explainability,” Prof. Britta Wrede talks with Dr. Angela Grimminger from Paderborn University and Prof. Anna-Lisa Vollmer from Bielefeld University. They explore how scaffolding functions in different contexts, such as explanatory situations, and what role it plays in their TRR 318 projects.

Listen in at: Spotify | Web | YouTube

In this episode, TRR 318 speakers Prof. Katharina Rohlfing from Paderborn University and Prof. Philipp Cimiano from Bielefeld University reflect on the highlights, challenges, and surprises of the first funding phase. Together with moderator Prof. Britta Wrede, they discuss how their perspective on co-construction has evolved over the past four years and share their outlook on the next steps and future directions for TRR 318.

Listen in at: Spotify | Web | YouTube

At TRR 318, monitoring human understanding plays a key role in improving explanations. It addresses the challenge of recognizing whether an explanation truly meets the listener’s needs – a task that Artificial Intelligence often struggles with. In the latest episode of the podcast “Explaining Explainability,” Prof. Britta Wrede talks with Prof. Hanna Drimalla and Prof. Friederike Kern, both from Bielefeld University. Together, they discuss why monitoring is challenging for AI, highlight its significance, and offer insights into how their research can improve future monitoring approaches.

Listen in at: Spotify | Web | YouTube

Stay updated

Don't miss out when new episodes are released.

Subscribe to our newsletter or follow us on LinkedIn or Instagram for updates!