Language for Talking about Explanatory Processes
Explanation is at the heart of TRR 318 research, which means the terms TRR researchers use in their daily work deserve a quick explanation themselves.
Co-construction refers to the interactive and iterative process in which partners jointly negotiate both the explanandum and the form of understanding that the explanation should achieve. By sequentially building on, refining, and modifying each other’s contributions, the partners achieve mutual participation, guided by scaffolding and monitoring. This process enables both partners to work actively toward a shared explanatory goal: while the explanation emerges on the microlevel of interaction, it is also crucially modulated on the macrolevel.
An explainee is the individual or group who receives an explanation. This term refers specifically to those who are intended to understand a concept or information being conveyed. In a classroom setting, for instance, when a teacher explains a concept, they are the explainer, and the students are the explainees, as they are the ones receiving and processing the information.
An explainer is the individual or entity responsible for delivering an explanation. This agent guides the process of clarification and understanding by presenting information, concepts, or ideas in a manner that is intended to enhance the understanding of the explainee. In a classroom setting, for instance, the teacher acts as the explainer, facilitating learning by conveying knowledge and answering questions.
An explanandum is the entity, event, or phenomenon that is the focus of an explanation. It refers to what is being clarified or understood through the explanatory process. In the context of artificial intelligence, the explanandum might be the decision made by a credit scoring model, the classification of an image as a “cat,” or a recommendation for a specific movie based on a user’s preferences.
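To make the XAI case concrete, the sketch below treats the output of a toy credit-scoring rule as the explanandum. The function, its variable names, and the threshold are all hypothetical illustrations, not any actual scoring model:

```python
# Illustrative sketch: in an XAI setting, the explanandum is the model's
# decision itself. The scoring rule and threshold here are hypothetical.
def credit_decision(income: float, debt: float) -> str:
    """A toy scoring rule; its output is what an explanation must clarify."""
    score = income - 2 * debt
    return "approved" if score > 0 else "rejected"

# The explanandum of the subsequent explanatory process:
explanandum = credit_decision(income=50_000, debt=30_000)  # "rejected"
```

An explainer (human or system) would then have to clarify *why* this particular output came about, e.g. by pointing to the weight given to debt.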
Monitoring is a multimodal process in which observed outcomes are compared to predicted results. Partners utilize speech, gestures, and nonverbal cues to track the progress of their joint tasks. The explainer evaluates the explainee’s understanding to determine whether the explanation is effective or needs refinement. In turn, the explainee also monitors the explainer, gauging the appropriate level of detail required for a particular explanation.
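The comparison of observed outcomes with predicted results can be caricatured in a few lines. This is a minimal sketch, assuming understanding can be scored on a 0–1 scale; the signal names, scale, and tolerance are invented for illustration:

```python
# Illustrative sketch of monitoring as a predicted-vs-observed comparison.
# The 0-1 understanding scale and the tolerance value are hypothetical.
def monitor(predicted_understanding: float, observed_signal: float,
            tolerance: float = 0.2) -> str:
    """Compare what the explainer expected with what feedback indicates."""
    gap = predicted_understanding - observed_signal
    if gap > tolerance:
        # the explainee understood less than expected
        return "refine explanation"
    return "continue"

monitor(predicted_understanding=0.9, observed_signal=0.4)  # "refine explanation"
```

In real interaction, the observed signal is of course multimodal (speech, gesture, gaze) rather than a single number, and both partners run such a comparison on each other.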
In the educational and developmental psychology literature, scaffolding describes how an expert guides a learner by adjusting the level of assistance to the learner’s performance. TRR research adapts this concept to explanation in encounters with artificial intelligence. Both partners in the interaction can scaffold each other, meaning they provide one another with the information needed to collaboratively construct both the explanandum and the desired form of understanding. Alongside monitoring, scaffolding serves not only as a form of guidance but also as a means of supervision, facilitating the active participation of both partners in the learning process.
A partner model is a key resource for “placing” explanations: the mental model a person has of the individual to whom they are explaining. It encompasses knowledge and assumptions about the explainee, including their role in the dialog, their prior knowledge, their presumed level of understanding, and their general characteristics. In the context of explainable artificial intelligence, the system develops the partner model through its interactions with the user, enabling tailored explanations that align with the user’s needs and level of comprehension.
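A minimal sketch of what such a model might look like as a data structure, under the assumption that an explaining system tracks which concepts the explainee already knows. All names and fields are hypothetical and not part of any TRR 318 implementation:

```python
from dataclasses import dataclass, field

@dataclass
class PartnerModel:
    """Hypothetical partner model an explaining system maintains."""
    role: str = "explainee"            # role in the dialog
    expertise: str = "novice"          # assumed general level of prior knowledge
    known_concepts: set = field(default_factory=set)

    def update(self, concept: str, understood: bool) -> None:
        """Revise assumptions after monitoring the explainee's feedback."""
        if understood:
            self.known_concepts.add(concept)
        else:
            self.known_concepts.discard(concept)

    def needs_explanation(self, concept: str) -> bool:
        """Tailor the explanation: skip concepts assumed to be known."""
        return concept not in self.known_concepts
```

The point of the sketch is the coupling to monitoring: each observed reaction updates the model, and each subsequent explanation is “placed” against the updated assumptions.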
In the context of explainable AI, understanding refers to the relevance of the information provided to users of AI systems. Unlike the current debate, which focuses on simply giving “enough information,” the TRR’s approach emphasizes that understanding should be tailored to what is important to the user.
The TRR researchers differentiate between two key concepts: enabledness and comprehension.
- Enabledness relates to how explanations help users make choices or take actions.
- Comprehension involves a deeper awareness that allows users to form a broader understanding of a phenomenon beyond what is immediately obvious.
Explainable systems are designed to be inherently understandable and transparent. Within the social framework on explanations in the TRR 318, they allow users to participate in the explanation process; yet, while these systems provide valuable insights, they often fall short in facilitating the active co-construction of context with users, which can limit their adaptability and interactivity.
Explaining systems, following the social framework on explanations within the TRR 318, actively and cooperatively generate explanations for specific phenomena or decisions, placing a strong emphasis on the involvement of the explainee. In this way, relevant context and factors can be brought into the interaction together with the explainee and thus become co-constructed. The TRR 318 focuses on advancing these explaining systems, addressing gaps in the current XAI literature through enhanced contextual adaptability and a more interaction-oriented modeling approach.