Multimodal Co-Construction of Explanations with XAI Workshop

San José | November 4, 2024

Organisers: Hendrik Buschmeier, Stefan Kopp, Teena C. Hassan.

This workshop aims to bring together two growing and increasingly important research strands. On the one hand, Explainable AI (XAI) is a flourishing field concerned with developing methods to make modern machine-learning-based AI systems transparent and “scrutable” for their stakeholders (developers, users, policy makers). Current systems provide seemingly superior solutions to highly complex problems while relying on so-called black-box models, such as deep neural network architectures. As a result, the field has begun to develop XAI methods that aim to provide human-understandable explanations of system decisions or behavior. However, such explanations are often generated ad hoc, for individual decisions, and are primarily aimed at developers. Enabling naive users to understand current AI systems, and thus empowering them to act on and use such technology in an informed, self-directed, and responsible manner, remains a challenge. On the other hand, research on multimodal interaction has made significant progress towards meaningful interaction and communication between human users and artificial systems or agents. This field has developed sophisticated methods for processing social signals, generating expressive communicative behavior, and enabling multimodal dialogue between users and AI agents.

With this workshop, we aim to foster scientific exchange and cross-fertilization between these two fields, which we believe is much needed and of great mutual benefit. Currently, XAI largely relies on either visualization or language-based representation (using LLMs). At the same time, initial approaches to explainable autonomous robots (XAR) or embodied agents raise the need for situated and multimodal forms of explanation. Moreover, it has already been argued that explanations are produced through an interactive and social process, co-constructed by the explainer (the XAI system) and the explainee (the human user). The communicative means for carrying out this interactive process (e.g., conversational speech, facial expressions, gestures, feedback, interactive repair, turn-taking) are inherently multimodal and require the development and application of advanced methods for processing multimodal behavior and interaction, with a dedicated focus on explanations. The expected outcomes of the workshop are the identification of key research lines and approaches, to be documented in the workshop proceedings, improved networking between researchers in the XAI and ICMI communities, and a better understanding of XAI as a multimodal, interactive co-construction challenge.

The workshop is held at ACM ICMI 2024 in a hybrid format, allowing online participation. More information about the workshop can be found here.

Important dates:

  • Workshop date: November 4, 2024