People now encounter artificial intelligence (AI) in almost every area of life. AI is meant to support, advise, and provide objective, optimized solutions for difficult decisions. But how can people understand why and how an AI has arrived at a particular result? In the Collaborative Research Centre/Transregio (SFB/TRR) 318 of Bielefeld and Paderborn Universities, an interdisciplinary research team is tackling this problem: the goal is to develop understandable assistance systems that can, for example, answer users' queries and provide precise explanations. In the new research podcast, the researchers give an insight into their work and explain why explainable AI represents an important step for the future of AI research.
Researchers from six disciplines - computer science, linguistics, media studies, psychology, sociology and economics - are investigating in the SFB how interactive explanation works as a process and how it can be transferred to AI systems. The podcast "Explaining Explainability" brings all the disciplines together: in it, host Professor Dr. Britta Wrede talks to two researchers from different disciplines about their current research, its challenges, areas of application, and the relevance of explainable AI.
The first episode, whose editorial concept was developed as part of a master's thesis in the Interdisciplinary Media Studies program, approaches the topic of explainability from the perspectives of computer science and linguistics. The guests are the SFB's spokespersons, Professor Dr. Philipp Cimiano (computer science) from Bielefeld University and Professor Dr. Katharina Rohlfing (psycholinguistics) from Paderborn University. What does explainability actually mean, and why and in which areas is it so important? Where does research currently stand - and what are the greatest challenges?
How can highly technical processes be presented in a way that everyone can understand?
"What makes AI research at the TRR so special is the focus on the person to whom something is being explained," says Philipp Cimiano. "How can highly technical processes and functions be rendered in a generally understandable way? Contrasts, comparisons and gradations can shed light on this. For example, if we want to explain the decision-making process through a machine-learned model, then the properties of the case in which a decision is made, so-called features, play an important role. The decision can be explained by making the importance of each feature in the decision understandable." In the podcast, the researchers use concrete examples to explain what their research is about and give insights into their everyday research.
Further episodes will follow every two months. The next episode will deal with the process of everyday explanations: what has linguistics found out so far about the structure of everyday explanations - and how can computer science use these findings to design AI systems?