Project C01: Healthy distrust in explanations

Project C01 focuses on crucial overarching properties of decisions and explanations. It investigates how a person’s critical attitude towards an AI system can be supported by fostering a healthy distrust in intelligent systems, and whether and how explainable machine learning methods can reinforce this attitude. Such an attitude is essential for people to act mindfully and to be empowered to shape AI. The project benefits from interdisciplinary expertise in experimental studies and formal modeling, as well as from use cases in the domain of machine learning.
Research areas: Computer science, Psychology
Support staff
Valeska Behr, Paderborn University
Oliver Debernitz, Paderborn University
Konstantin Gubenko, Paderborn University
Publications
Trust, distrust, and appropriate reliance in (X)AI: A conceptual clarification of user trust and survey of its empirical evaluation
R. Visser, T.M. Peters, I. Scharlau, B. Hammer, Cognitive Systems Research (2025).
Interacting with fallible AI: Is distrust helpful when receiving AI misclassifications?
T.M. Peters, I. Scharlau, Frontiers in Psychology 16 (2025).
Assessing healthy distrust in human-AI interaction: Interpreting changes in visual attention
T.M. Peters, K. Biermeier, I. Scharlau, (2025).
Explaining Outliers using Isolation Forest and Shapley Interactions
R. Visser, F. Fumagalli, E. Hüllermeier, B. Hammer, in: Proceedings of the European Symposium on Artificial Neural Networks (ESANN), 2025.
The Importance of Distrust in AI
T.M. Peters, R.W. Visser, in: Communications in Computer and Information Science, Springer Nature Switzerland, Cham, 2023.
“I do not know! but why?” — Local model-agnostic example-based explanations of reject
A. Artelt, R. Visser, B. Hammer, Neurocomputing 558 (2023).
Explaining Reject Options of Learning Vector Quantization Classifiers
A. Artelt, J. Brinkrolf, R. Visser, B. Hammer, in: Proceedings of the 14th International Joint Conference on Computational Intelligence, SCITEPRESS - Science and Technology Publications, 2022.
Model Agnostic Local Explanations of Reject
A. Artelt, R. Visser, B. Hammer, in: ESANN 2022 Proceedings, Ciaco - i6doc.com, 2022.