Project A02: Monitoring the understanding of explanations

When something is being explained, the explainee signals their understanding – or lack thereof – to the explainer through verbal expressions and non-verbal means of communication, such as gestures and facial expressions. By nodding, the explainee can signal that they have understood. Nodding, however, can also be meant as a request to continue with the explanation; which meaning is intended has to be determined from the context of the conversation. In Project A02, linguists and computational linguists are investigating how people (and later, artificial agents) recognize whether the person they are explaining something to understands – or not. To this end, the research team is examining 80 dialogues in which one person explains a social game to another, looking for communicative feedback signals that indicate varying degrees of comprehension in the process of understanding. The findings from these analyses will be incorporated into an intelligent system that can detect feedback signals such as head nods and interpret them in terms of the signaled level of understanding.
Research areas: Computer science, Linguistics
Support Staff
Nico Dallmann - Bielefeld University
Sonja Friedla - Paderborn University
Jule Kötterheinrich - Paderborn University
Daniel Mohr - Paderborn University
Celina Nitschke - Paderborn University
Göksun Beeren Usta - Bielefeld University
Posters
Conference-poster presented at the 10th Conference of the International Society for Gesture Studies with the title “Different explanation topics, different gestural dimensions?” by Stefan Lazarov.
Conference-poster presented at CogSci 2024 with the title "Variations in explainer's gesture deixis in explanations related to the monitoring of explainees' understanding" by Stefan Lazarov and Angela Grimminger.
Conference-poster presented at the Symposium Series on Multimodal Communication 2023 with the title "An Unsupervised Method for Head Movement Detection" by Yu Wang and Hendrik Buschmeier.
Conference-poster presented at the Symposium Series on Multimodal Communication 2023 with the title "The relation between multimodal behaviour and elaborations in explanations" by Stefan Lazarov and Angela Grimminger.
Publications
Applications of video-recall for the assessment of understanding and knowledge in explanatory contexts
S.T. Lazarov, M. Schaffer, V. Gladow, H. Buschmeier, A. Grimminger, H.M. Buhl (n.d.).
Acoustic detection of false positive backchannels of understanding in explanations
O. Türk, S.T. Lazarov, H. Buschmeier, P. Wagner, A. Grimminger, in: LingCologne 2025 – Book of Abstracts, 2025, p. 36.
A BFO-based ontological analysis of entities in Social XAI
M. Booshehri, H. Buschmeier, P. Cimiano, in: Proceedings of the 15th International Conference on Formal Ontology in Information Systems, IOS Press, 2025, pp. 255–268.
A BFO-based ontology of context for Social XAI
M. Booshehri, H. Buschmeier, P. Cimiano, in: Abstracts of the 3rd TRR 318 Conference: Contextualizing Explanations, 2025.
SemDial 2025 – Bialogue. Proceedings of the 29th Workshop on the Semantics and Pragmatics of Dialogue
N. Ilinykh, A. Robrecht, S. Kopp, H. Buschmeier, eds., SemDial 2025 – Bialogue. Proceedings of the 29th Workshop on the Semantics and Pragmatics of Dialogue, Bielefeld, Germany, 2025.
Variations in explainers’ gesture deixis in explanations related to the monitoring of explainees’ understanding
S.T. Lazarov, A. Grimminger, in: Proceedings of the Annual Meeting of the Cognitive Science Society, 2024.
Towards a Computational Architecture for Co-Constructive Explainable Systems
H. Buschmeier, P. Cimiano, S. Kopp, J. Kornowicz, O. Lammert, M. Matarese, D. Mindlin, A.S. Robrecht, A.-L. Vollmer, P. Wagner, B. Wrede, M. Booshehri, in: Proceedings of the 2024 Workshop on Explainability Engineering, ACM, 2024, pp. 20–25.
Changes in the topical structure of explanations are related to explainees’ multimodal behaviour
S.T. Lazarov, K. Biermeier, A. Grimminger, Interaction Studies 25 (2024) 257–280.
Explain with, rather than explain to: How explainees shape their own learning
J.B. Fisher, K.J. Rohlfing, E. Donnellan, A. Grimminger, Y. Gu, G. Vigliocco, Interaction Studies 25 (2024) 244–255.
Predictability of understanding in explanatory interactions based on multimodal cues
O. Türk, S.T. Lazarov, Y. Wang, H. Buschmeier, A. Grimminger, P. Wagner, in: Proceedings of the 26th ACM International Conference on Multimodal Interaction, San José, Costa Rica, 2024, pp. 449–458.
The illusion of competence: Evaluating the effect of explanations on users’ mental models of visual question answering systems
J. Sieker, S. Junker, R. Utescher, N. Attari, H. Wersing, H. Buschmeier, S. Zarrieß, in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, ACL, Miami, FL, USA, 2024, pp. 19459–19475.
Towards a BFO-based ontology of understanding in explanatory interactions
M. Booshehri, H. Buschmeier, P. Cimiano, in: Proceedings of the 4th International Workshop on Data Meets Applied Ontologies in Explainable AI (DAO-XAI), International Association for Ontology and its Applications, Santiago de Compostela, Spain, 2024.
Detecting subtle differences between human and model languages using spectrum of relative likelihood
Y. Xu, Y. Wang, H. An, Z. Liu, Y. Li, in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, ACL, Miami, FL, USA, 2024, pp. 10108–10121.
A model of factors contributing to the success of dialogical explanations
M. Booshehri, H. Buschmeier, P. Cimiano, in: Proceedings of the 26th ACM International Conference on Multimodal Interaction, ACM, San José, Costa Rica, 2024, pp. 373–381.
Conversational feedback in scripted versus spontaneous dialogues: A comparative analysis
I. Pilán, L. Prévot, H. Buschmeier, P. Lison, in: Proceedings of the 25th Meeting of the Special Interest Group on Discourse and Dialogue, Kyoto, Japan, 2024, pp. 440–457.
Turn-taking dynamics across different phases of explanatory dialogues
P. Wagner, M. Włodarczak, H. Buschmeier, O. Türk, E. Gilmartin, in: Proceedings of the 28th Workshop on the Semantics and Pragmatics of Dialogue, Trento, Italy, 2024, pp. 6–14.
Can AI explain AI? Interactive co-construction of explanations among human and artificial agents
N. Klowait, M. Erofeeva, M. Lenke, I. Horwath, H. Buschmeier, Discourse & Communication 18 (2024) 917–930.
Revisiting the phenomenon of syntactic complexity convergence on German dialogue data
Y. Wang, H. Buschmeier, in: Proceedings of the 20th Conference on Natural Language Processing (KONVENS 2024), Vienna, Austria, 2024, pp. 75–80.
How much does nonverbal communication conform to entropy rate constancy?: A case study on listener gaze in interaction
Y. Wang, Y. Xu, G. Skantze, H. Buschmeier, in: Findings of the Association for Computational Linguistics ACL 2024, Bangkok, Thailand, 2024, pp. 3533–3545.
Automatic reconstruction of dialogue participants’ coordinating gaze behavior from multiple camera perspectives
A.N. Riechmann, H. Buschmeier, in: Book of Abstracts of the 2nd International Multimodal Communication Symposium, Frankfurt am Main, Germany, 2024, pp. 38–39.
Forms of Understanding for XAI-Explanations
H. Buschmeier, H.M. Buhl, F. Kern, A. Grimminger, H. Beierling, J.B. Fisher, A. Groß, I. Horwath, N. Klowait, S.T. Lazarov, M. Lenke, V. Lohmer, K. Rohlfing, I. Scharlau, A. Singh, L. Terfloth, A.-L. Vollmer, Y. Wang, A. Wilmes, B. Wrede (n.d.).
Does listener gaze in face-to-face interaction follow the Entropy Rate Constancy principle: An empirical study
Y. Wang, H. Buschmeier, in: Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, 2023, pp. 15372–15379.
Modeling Feedback in Interaction With Conversational Agents—A Review
A. Axelsson, H. Buschmeier, G. Skantze, Frontiers in Computer Science 4 (2022).