Toward a More Social Approach to AI Research

An international team of researchers, including members of TRR 318, argues that explainable AI must take social aspects into account. In their new book, “Social Explainable AI: Communications of NII Shonan Meetings”, they apply social concepts to explainable artificial intelligence and develop proposals for its practical implementation. The editors aim to promote both the development of human-centered technology and interdisciplinary discourse on the subject.

The field of explainable AI (XAI), which is currently the subject of intensive research, aims to empower users with transparent information so they can understand AI systems and either question or accept their decisions. Explainability is increasingly viewed as a fundamental prerequisite for fair, responsible, and trustworthy AI. “XAI aims to solve the problem of the opacity of AI systems and their acceptance, but social aspects are often left out because they are difficult for technology developers to grasp,” explains Prof. Dr. Katharina Rohlfing, spokesperson for TRR 318.

Explanations require interaction. Among humans, explanations are complex processes that follow patterns and rules, build on previous statements, and utilize various communication modalities simultaneously. “This means that explanations by and involving AI systems must also take the entire social context into account,” says Rohlfing. The diversity of relevant social aspects and the challenge of operationalizing them can deter many researchers from integrating the social dimension into XAI. Nevertheless, the pressure on research is growing: “It is becoming increasingly clear that explainable systems are of little use if they lack context or situational awareness and do not interact with users. Every user requires a different level of explanation.”

The book “Social Explainable AI” (sXAI) introduces key concepts in the field, explores social interaction with explainable systems, and surveys the relevant social concepts. Inspired by social interactions, the editors define sXAI as systems that adapt interactively to users in order to collaboratively develop satisfactory explanations. Furthermore, the publication discusses ethical issues related to sXAI. It is aimed at an interdisciplinary readership: not only technology developers but also researchers from socio-technical disciplines. “The book is an invitation to exchange ideas and further develop the sXAI field,” write the editors.

The publication is the result of the 2023 Shonan Meeting in Japan, where researchers from various disciplines and countries came together to explore how explanations can be tailored to the needs and goals of different users. The editorial team behind the publication, released by Springer, consists of Prof. Kary Främling (Umeå University, Sweden), Prof. Brian Lim (National University of Singapore), and TRR researchers Prof. Kirsten Thommes, Prof. Dr. Suzana Alpsancar, and Prof. Dr. Katharina Rohlfing. “Social Explainable AI” is now available as an English-language open-access book.

To the publication: 
Social Explainable AI: Communications of NII Shonan Meetings (Open Access)

To the Shonan Meeting 2023: 
Social Explainable AI: Designing Multimodal and Interactive Communication to Tailor Human-AI Collaborations