Workshop@ICDL 2022, London, Sept 12: Understanding scaffolding mechanisms better to advance research on (Explainable) AI

About the workshop

As a concept, scaffolding is known from developmental studies. It occurs in asymmetric interactions in which caregivers support their children in achieving goals, actions, or tasks that the children would not have the skills to achieve on their own (Wood, Bruner, & Ross, 1976). Scaffolding can be provided nonverbally or verbally, by drawing or directing attention to specific aspects of the task, or by marking its critical features.
Explainable AI (XAI) aims to make artificial systems controllable by humans (Clancey, 1983). For this purpose, such systems are designed to be introspectable. More recent developments in XAI recognize that for an AI system to become introspectable to its users, interaction is helpful: it can guide users to the aspects that are relevant for them. It is exactly this context of designing XAI for which scaffolding is highly relevant (Rohlfing et al., 2021), especially when targeting interactive explanations that can be guided by the user’s need for understanding (Sokol & Flach, 2020). Any explanation starts with an asymmetry of knowledge: one partner knows more, the other less. The goal is to share the knowledge.

Research on AI has acknowledged the value and high potential of scaffolding for learning. However, whereas scaffolding is broadly recognized as a form of assistance, the dynamics of the adaptation process are barely investigated. In this workshop, scholars are thus invited to view scaffolding as developing within ongoing interaction. After introductory presentations on the state of the art, further presentations will shed light on how to account for task and partner models (of both explainer and explainee) and how specific strategies might foster the adaptation process during an ongoing interaction. In this way, the workshop contributes to a broader perspective on scaffolding and to new potentials that can be used for the design of Explainable AI.






Introductory session



Scaffolding in human development

Angela Grimminger (Psycholinguistics)


Scaffolding in human–robot interaction

Anna-Lisa Vollmer (Computer Science)


Coffee break



Project presentations



Minimal design and tech for investigating scaffolding

Shuntaro Okazaki (Computer Science)


Cognitive and interactive adaptivity as a driving force of scaffolding

Josephine B. Fisher, Erick Ronoh, Katharina J. Rohlfing, Heike Buhl (Psycholinguistics, Psychology)


Scaffolding explanations through multimodal feedback

Stefan Lazarov, Angela Grimminger (Psycholinguistics)


Lunch break



Negation as a scaffolding strategy

K. Rohlfing, Amit Singh, Ngoc Chi Banh & I. Scharlau (Psychology, Psycholinguistics)


Emotional scaffolding

Britta Wrede, Kirsten Thommes, Christian Schütze, Olesja Lammert, B. Richter (Computer Science, Management Science)


Final discussion



Scaffolding behavior is multifaceted (e.g., sharing a goal, exaggerated verbal/nonverbal instruction, and, in some cases, direct support). Here, I introduce our previous study, which focuses on the minimal component of scaffolding in an imitation-learning situation, and the analytical technique to disentangle it. This will help us understand what scaffolding is and provide first insights for developing XAI.

In dyadic explanations, interlocutors adapt cognitively as well as verbally to one another as the interaction unfolds. Crucial for cognitive adaptation is the so-called partner model, which refers to the process of inferring the partner’s mental states. Initial partner models are formed via stereotypes, cues, or prior assumptions and are continuously updated during the interaction. To reveal how these two forms of adaptivity serve scaffolding, we hypothesize that specific patterns of cognitive adaptivity will co-occur with verbal adaptivity.

Can explainees’ reactions scaffold the structure of explanations in the moment of conversation? In our study, we investigate whether explainees’ multimodal behavior can trigger a change in the episodic structure of a doctor’s explanations of an upcoming surgery. Adapting previous research on explanation episodes (Roscoe & Chi, 2007), we analyze patients’ multimodal behavior (gaze, gestures, and backchannelling) during transitions between different blocks of explanations.

Whereas scaffolding is mostly conceived as clear guidance for a task, indicating what to do, the use of negation suggests what not to do. We investigated whether negation influences basic processes such as the distribution of attention across the visual field. In an experiment with pairs of colored items, the instruction “now red!” caused a large attentional weight. When the instruction was negated (“not green!”), the attentional weights were more balanced. This points toward the possibility of scaffolding attention to a more balanced distribution by negation, which will be tested in human–robot interaction.

Emotions play an important role in tutoring situations and affect understanding processes in different ways. Whereas in parent–child interactions the display of ostensive affective signals such as surprise or joy can support the understanding of action explanations, in other interactions, such as those with decision support systems, emotions may hinder the processes of understanding and action taking. We therefore develop an interactive system that is capable of scaffolding emotional processes in order to avoid such negative effects.