With ChatGPT and DeepSeek, have we already found explainable AI that meets our requirements for an AI that can explain itself? Moderator Prof. Dr.-Ing. Britta Wrede discusses this question with her guests Prof. Dr. Henning Wachsmuth and Prof. Dr. Axel Ngonga Ngomo in the fourth episode of the podcast “Explaining Explainability”.
From different perspectives, the experts shed light on the extent to which large language models (LLMs) already engage in co-construction, where their limits lie, and how they can be developed further. The two project leaders also present their research on co-construction in TRR 318. Listeners will learn, for example, how metaphors contribute to co-constructed explanations and which project is planned for the second funding phase of TRR 318.
Listen now: To the podcast