Project B05: Co-constructing explainability with an interactively learning robot

Project B05 explores how co-constructive training with a robot can be designed to adapt to users and their understanding. At the center of the project is a double loop of learning and understanding: the robot continuously adjusts its behavior, while the users gain insight into its learning mechanisms. The goal is that, by the end of the learning process, anyone will be able to train a robot. The focus is on applications in the healthcare sector, where even laypersons with little technical affinity interact with robots.

To achieve this goal, an explanatory model will first be developed, from which various explanatory elements will be derived and technically implemented. User behavior will then be examined in an online study. Based on the results, the AI system will learn which explanatory elements should be shown to the users at which point in time. During training, the users will be closely monitored and scaffolded; at the same time, they will adjust their inputs and actions by observing the robot's reactions. In this way, the users learn to train the robot more effectively.
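The idea of learning which explanatory element to show at which point in time can be sketched as a contextual bandit: each explanatory element is an arm, the user's current state is the context, and user feedback serves as the reward. The sketch below is a minimal hypothetical illustration under these assumptions — the element names, the context label, the feedback signal, and the epsilon-greedy policy are all invented for illustration and are not the project's actual model or implementation.

```python
import random
from collections import defaultdict

class ExplanationSelector:
    """Epsilon-greedy selection of explanatory elements per user-state context.

    Hypothetical sketch: contexts ("novice") and elements ("what", "why",
    "how") are illustrative labels, not the project's actual design.
    """

    def __init__(self, elements, epsilon=0.1, seed=0):
        self.elements = list(elements)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        # (context, element) -> visit count and running mean of helpfulness
        self.counts = defaultdict(int)
        self.values = defaultdict(float)

    def select(self, context):
        """Explore with probability epsilon, otherwise exploit the
        element with the highest estimated helpfulness in this context."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.elements)
        return max(self.elements, key=lambda e: self.values[(context, e)])

    def update(self, context, element, reward):
        """Incorporate user feedback (e.g. 1 = understood, 0 = not)
        as an incrementally updated mean."""
        key = (context, element)
        self.counts[key] += 1
        self.values[key] += (reward - self.values[key]) / self.counts[key]


# Toy usage: simulate a user for whom "why"-explanations help most.
selector = ExplanationSelector(["what", "why", "how"], epsilon=0.2, seed=1)
for _ in range(200):
    element = selector.select("novice")
    reward = 1 if element == "why" else 0  # simulated user feedback
    selector.update("novice", element, reward)
```

In practice the context would encode what the system has observed about the user's understanding (the "monitoring and scaffolding" described above), and the reward would come from the user's subsequent training success rather than a simulated signal.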


Research areas: Computer science, Computer science education

Project Leaders

Prof. Dr.-Ing. Anna-Lisa Vollmer


Prof. Dr. Carsten Schulte


Staff

Helen Beierling, M. Sc.


Patrick Schüren, M.Ed.


Support Staff

Anna-Lena Rinke, Bielefeld University

Arthur Maximilian Noller, Bielefeld University

Mathis Tibbe, Bielefeld University

Publications

Components of an explanation for co-constructive sXAI

A.-L. Vollmer, H.M. Buhl, R. Alami, K. Främling, A. Grimminger, M. Booshehri, A.-C. Ngonga Ngomo, in: K.J. Rohlfing, K. Främling, S. Alpsancar, K. Thommes, B.Y. Lim (Eds.), Social Explainable AI, Springer, n.d.


Practices: How to establish an explaining practice

K.J. Rohlfing, A.-L. Vollmer, A. Grimminger, in: K. Rohlfing, K. Främling, K. Thommes, S. Alpsancar, B.Y. Lim (Eds.), Social Explainable AI, Springer, n.d.


The power of combined modalities in interactive robot learning

H. Beierling, R. Beierling, A.-L. Vollmer, Frontiers in Robotics and AI 12 (2025).


Human-Interactive Robot Learning: Definition, Challenges, and Recommendations

K. Baraka, I. Idrees, T.K. Faulkner, E. Biyik, S. Booth, M. Chetouani, D.H. Grollman, A. Saran, E. Senft, S. Tulli, A.-L. Vollmer, A. Andriella, H. Beierling, T. Horter, J. Kober, I. Sheidlower, M.E. Taylor, S. van Waveren, X. Xiao, Transactions on Human-Robot Interaction (n.d.).


Forms of Understanding for XAI-Explanations

H. Buschmeier, H.M. Buhl, F. Kern, A. Grimminger, H. Beierling, J.B. Fisher, A. Groß, I. Horwath, N. Klowait, S.T. Lazarov, M. Lenke, V. Lohmer, K. Rohlfing, I. Scharlau, A. Singh, L. Terfloth, A.-L. Vollmer, Y. Wang, A. Wilmes, B. Wrede, Cognitive Systems Research 94 (2025).


What you need to know about a learning robot: Identifying the enabling architecture of complex systems

H. Beierling, P. Richter, M. Brandt, L. Terfloth, C. Schulte, H. Wersing, A.-L. Vollmer, Cognitive Systems Research 88 (2024).


Advancing Human-Robot Collaboration: The Impact of Flexible Input Mechanisms

H. Beierling, K. Loos, R. Helmert, A.-L. Vollmer, in: Proc. Mech. Mapping Hum. Input Robots Robot Learn. Shared Control/Autonomy-Workshop RSS, 2024.


Technical Transparency for Robot Navigation Through AR Visualizations

L. Dyck, H. Beierling, R. Helmert, A.-L. Vollmer, in: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, ACM, 2023, pp. 720–724.


Forms of Understanding of XAI-Explanations

H. Buschmeier, H.M. Buhl, F. Kern, A. Grimminger, H. Beierling, J. Fisher, A. Groß, I. Horwath, N. Klowait, S. Lazarov, M. Lenke, V. Lohmer, K. Rohlfing, I. Scharlau, A. Singh, L. Terfloth, A.-L. Vollmer, Y. Wang, A. Wilmes, B. Wrede, ArXiv:2311.08760 (2023).

