Project B03: Exploring users, roles, and explanations in real-world contexts

Project B03 investigates how lay users make sense of Artificial Intelligence (AI) in their everyday lives. Drawing on qualitative data from 63 participants in Germany and the UK, gathered through surveys, digital diaries, interviews, and group discussions, the project examines when and why people feel explanations of AI are necessary, and how individual context and prior knowledge shape these judgements. Findings show that users often rely on non-technical knowledge to interpret AI, and that political and social considerations frequently outweigh technical concerns. The project therefore argues that explainable AI (XAI) research must move beyond purely technical solutions and attend to the social processes through which people interpret and evaluate AI systems.

Research areas: Sociology, Media studies

Project leaders

Prof. Dr. Tobias Matzner

Staff

Dr. Patricia Jimenez

Patrick Henschen, M.A.

Support staff

Jennifer Dumpich, Paderborn University

Jessica Stanley-Dilley, Paderborn University

Former members

Prof. Dr. Ilona Horwath, Project leader

Dr. Christian Schulz, Research associate

Leonie von Egloffstein, Paderborn University

Esma Gökal, Paderborn University

Merle Mahne, Paderborn University

Lea Biere, Paderborn University

Publications

Structures Underlying Explanations

P. Jimenez, A.L. Vollmer, H. Wachsmuth, in: K. Rohlfing, K. Främling, B. Lim, S. Alpsancar, K. Thommes (Eds.), Social Explainable AI: Communications of NII Shonan Meetings, Springer Singapore, n.d.


Healthy Distrust in AI systems

B. Paaßen, S. Alpsancar, T. Matzner, I. Scharlau, ArXiv (2025).


Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues

L. Fichtel, M. Spliethöver, E. Hüllermeier, P. Jimenez, N. Klowait, S. Kopp, A.-C. Ngonga Ngomo, A. Robrecht, I. Scharlau, L. Terfloth, A.-L. Vollmer, H. Wachsmuth, ArXiv:2504.18483 (2025).


Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues

L. Fichtel, M. Spliethöver, E. Hüllermeier, P. Jimenez, N. Klowait, S. Kopp, A.-C. Ngonga Ngomo, A. Robrecht, I. Scharlau, L. Terfloth, A.-L. Vollmer, H. Wachsmuth, in: Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Association for Computational Linguistics, Avignon, France, n.d.


A new algorithmic imaginary

C. Schulz, Media, Culture & Society 45 (2023) 646–655.


Vernacular Metaphors of AI

C. Schulz, A. Wilmes, in: ICA Preconference Workshop “History of Digital Metaphors”, University of Toronto, May 25, n.d.


(De)Coding Social Practice in the Field of XAI: Towards a Co-constructive Framework of Explanations and Understanding Between Lay Users and Algorithmic Systems

J. Finke, I. Horwath, T. Matzner, C. Schulz, in: Artificial Intelligence in HCI, Springer International Publishing, Cham, 2022, pp. 149–160.

