Publications

Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems

The recent surge of interest in explainability in artificial intelligence (XAI) is propelled not only by technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: a passive explainee, a narrow view of the social process, and an undifferentiated assessment of the explainee's understanding. To overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view this co-construction on the microlevel as embedded in a macrolevel that yields expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the explainee's current level of understanding; the explainee, in turn, is expected to provide cues that direct the explainer. Building on explanation as a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of the interaction. Finally, we relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.

Rohlfing, K. J., Cimiano, P., Scharlau, I., Matzner, T., Buhl, H. M., Buschmeier, H., Esposito, E., Grimminger, A., Hammer, B., Häb-Umbach, R., Horwath, I., Hüllermeier, E., Kern, F., Kopp, S., Thommes, K., Ngonga Ngomo, A., Schulte, C., Wachsmuth, H., Wagner, P. & Wrede, B. (2021). Explanation as a social practice: Toward a conceptual framework for the social design of AI systems. IEEE Transactions on Cognitive and Developmental Systems, 13(3), 717-728. doi: 10.1109/TCDS.2020.3044366

Modeling Feedback in Interaction With Conversational Agents - A Review

Intelligent agents interacting with humans through conversation (such as a robot, embodied conversational agent, or chatbot) need to receive feedback from the human to make sure that their communicative acts have the intended consequences. At the same time, the human interacting with the agent will also seek feedback to ensure that her communicative acts have the intended consequences. In this review article, we give an overview of past and current research on how intelligent agents should be able both to give meaningful feedback to humans and to understand feedback given by users. The review covers feedback across different modalities (e.g., speech, head gestures, gaze, and facial expression), different forms of feedback (e.g., backchannels, clarification requests), and models that allow the agent to assess the user's level of understanding and adapt its behavior accordingly. Finally, we analyze some shortcomings of current approaches to modeling feedback and identify important directions for future research.

Axelsson, A., Buschmeier, H. & Skantze, G. (2022). Modeling feedback in interaction with conversational agents - A review. Frontiers in Computer Science, 4, 744574. doi: 10.3389/fcomp.2022.744574
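
As a toy illustration of the last point in the abstract above (assessing the user's level of understanding and adapting behavior accordingly), the sketch below maps a scalar understanding estimate to a feedback act. It is not one of the models covered by the review; the choose_feedback function, its thresholds, and the example utterances are assumptions made purely for illustration.

# Toy sketch (not a reviewed model): choose a feedback act from the agent's
# estimated understanding of the user's last utterance, a value in [0, 1].
from dataclasses import dataclass

@dataclass
class FeedbackAct:
    kind: str       # "backchannel", "clarification_request", or "repeat_request"
    utterance: str

def choose_feedback(understanding: float) -> FeedbackAct:
    """Higher understanding -> lighter feedback; lower -> explicit repair."""
    if understanding > 0.8:
        return FeedbackAct("backchannel", "mm-hm")
    if understanding > 0.4:
        return FeedbackAct("clarification_request", "Do you mean the red one?")
    return FeedbackAct("repeat_request", "Sorry, could you say that again?")

print(choose_feedback(0.9).kind)  # -> "backchannel"

In an actual agent, the understanding estimate would itself come from the multimodal feedback models discussed in the review rather than being a given number.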

(De)Coding Social Practice in the Field of XAI: Towards a Co-constructive Framework of Explanations and Understanding Between Lay Users and Algorithmic Systems

Advances in the development of AI and its application in many areas of society have given rise to an ever-increasing need for society's members to understand, at least to a certain degree, how these technologies work. Where users are concerned, most approaches in Explainable Artificial Intelligence (XAI) take a rather narrow view of the social process of explaining and assess explainees' understanding in an undifferentiated way, mostly treating them as passive recipients of information. The actual knowledge, motives, needs, and challenges of (lay) users in algorithmic environments remain largely unexamined. We argue for considering explanation as a social practice in which explainer and explainee co-construct understanding. We therefore seek to enable lay users to document, evaluate, and reflect on distinct AI interactions and, correspondingly, on how explainable AI actually is in their daily lives. With this contribution, we discuss our methodological approach, which enhances the documentary method by implementing 'digital diaries' via the mobile instant messaging app WhatsApp, the most widely used instant messaging service worldwide. Furthermore, from a theoretical stance, we examine the socio-cultural patterns of orientation that guide users' interactions with AI and their imaginaries of the technologies, a sphere that is mostly obscured and hard for researchers to access. Finally, we complete our paper with empirical insights from previous studies that point out the relevance of viewing explaining and understanding as a co-constructive social practice.

Finke, J., Horwath, I., Matzner, T. & Schulz, C. (2022). (De)Coding social practice in the field of XAI: Towards a co-constructive framework of explanations and understanding between lay users and algorithmic systems. In: Degen, H. & Ntoa, S. (eds.) Artificial Intelligence in HCI. HCII 2022. Lecture Notes in Computer Science, vol. 13336. Springer, Cham. doi: 10.1007/978-3-031-05643-7_10

Agnostic Explanation of Model Change based on Feature Importance

Explainable Artificial Intelligence (XAI) has so far mainly focused on static learning tasks. In this paper, we consider XAI in the context of online learning in dynamic environments, such as learning from real-time data streams, where models are learned incrementally and continuously adapted over the course of time. More specifically, we motivate the problem of explaining model change, i.e., explaining the difference between the models before and after adaptation rather than the models themselves. In this regard, we provide the first efficient model-agnostic approach to dynamically detecting, quantifying, and explaining significant model changes. Our approach is based on an adaptation of the well-known Permutation Feature Importance (PFI) measure. It includes two hyperparameters that control its sensitivity and directly influence explanation frequency, so that a human user can adjust the method to individual requirements and application needs. We assess and validate our method's efficacy on illustrative synthetic data streams with three popular model classes.

Muschalik, M., Fumagalli, F., Hammer, B. & Hüllermeier, E. (2022). Agnostic explanation of model change based on feature importance. KI - Künstliche Intelligenz. doi: 10.1007/s13218-022-00766-6
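
The PFI-based idea described in the abstract above can be pictured with a small sketch: compute permutation feature importance for the model before and after an incremental update on a recent data window, and report the features whose importance shifted by more than a threshold. This is a minimal illustration of the general idea under our own assumptions, not the authors' algorithm; the functions pfi and explain_model_change, the accuracy-based score, and the threshold parameter are hypothetical choices, and the threshold only loosely corresponds to the sensitivity hyperparameters mentioned in the abstract.

# Illustrative sketch only: flag and describe a "model change" by comparing
# Permutation Feature Importance (PFI) before and after a model update.
# Not the paper's method; names and thresholds are assumptions.
import numpy as np
from sklearn.metrics import accuracy_score

def pfi(model, X, y, n_repeats=5, seed=0):
    """PFI as the mean drop in accuracy when a feature column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = accuracy_score(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(baseline - accuracy_score(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances

def explain_model_change(model_before, model_after, X_window, y_window,
                         feature_names, threshold=0.05):
    """List features whose importance changed by more than `threshold`
    between the model before and after an incremental update."""
    diff = (pfi(model_after, X_window, y_window)
            - pfi(model_before, X_window, y_window))
    changed = [(name, float(d)) for name, d in zip(feature_names, diff)
               if abs(d) > threshold]
    return sorted(changed, key=lambda item: -abs(item[1]))

In a streaming setting, such a check would run on a sliding window after each model update, and an explanation would be emitted only when the returned list is non-empty, which is roughly how a sensitivity hyperparameter would govern explanation frequency.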