Co-constructing explainability with an interactively learning robot (Project B05)

In Project B05, researchers from the field of computer science are exploring non-verbal explanations between humans and machines. A robot is tasked with learning an action, such as a specific movement, by interacting with a human. Misunderstandings can arise during this process because human users often do not know how robots acquire skills: is the robot's direction of gaze important, or do other factors influence machine learning?

Researchers on this project are investigating how study participants perceive the robot's workings and are developing visualizations to improve users' understanding of it. In addition, they are analyzing how gender, age, and prior knowledge affect interactions with the robot, and how explanatory strategies change over the course of the interaction. The findings of this project will situate the concept of explainability in a social context.

Publications

Dyck, L., Beierling, H., Helmert, R., and Vollmer, A. (2023) Technical Transparency for Robot Navigation Through AR Visualizations. In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI '23). Association for Computing Machinery, New York, NY, USA, 720–724. https://doi.org/10.1145/3568294.3580181

In this video, the project leaders present their view of co-construction.

"Explainability in a human-AI teaching scenario" with B05 scientific staff member Helen Beierling

Research areas

Computer science

Project leader

Prof. Dr.-Ing. Anna-Lisa Vollmer, Bielefeld University

Staff

Helen Beierling, Bielefeld University

Support staff

Leonie Dyck, Bielefeld University

Arthur Maximilian Noller, Bielefeld University

Mathis Tibbe, Bielefeld University