In their latest study, Tobias Peters and Professor Dr. Ingrid Scharlau of Project C01 examined how people deal with incorrect recommendations from AI systems and whether deliberately fostering distrust can improve users' performance when working with these systems. The key finding: calling for skepticism does not improve performance; instead, it tends to worsen it. The study, titled "Interacting with fallible AI: Is distrust helpful when receiving AI misclassifications?", was published in the journal Frontiers in Psychology.
In two experimental scenarios on image classification, study participants received advice from an AI system whose quality was deliberately degraded over the course of the experiment. Participants were asked to decide whether geometric shapes belonged to certain categories and whether images were real or AI-generated. During the experiment, they were regularly asked to what degree they trusted or distrusted the AI advice. This allowed the psychologists to measure the extent to which participants were influenced by incorrect AI advice.
The researchers looked specifically at whether an explicit call for skepticism (i.e., being prompted to critically question every piece of AI advice) improved task performance compared to a neutral condition. As Tobias Peters explains, “When interacting with AI that makes mistakes, our instruction to be skeptical surprisingly did not help. This means that our instruction had hardly any impact on the use of advice given by AI.”
In addition to this experimental study, Tobias Peters and his colleague Kai Biermeier developed a Bayesian analysis based on signal detection theory. This accounts for uncertainties in the data and measures how well the study participants were able to distinguish between correct and incorrect AI advice. One thing became quite apparent: study participants did notice the increasing errors made by the AI system and reacted to them. “As the quality of the AI advice deteriorated, the participants trusted the AI less and less,” says Peters. “Even when the AI’s performance later improved, participants’ confidence in it did not return to the original level.”
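To illustrate the general idea behind such an analysis (this is a minimal sketch, not the authors' actual code or data), a Bayesian signal-detection estimate can treat "following correct AI advice" as a hit and "following incorrect AI advice" as a false alarm, and propagate uncertainty from trial counts into the sensitivity estimate. All counts and variable names below are hypothetical.

```python
import numpy as np
from scipy.stats import beta, norm

# Hypothetical counts for one participant (illustrative only, not study data):
hits, correct_trials = 42, 50            # followed the AI when its advice was correct
false_alarms, incorrect_trials = 12, 30  # followed the AI when its advice was incorrect

# Beta(1, 1) priors on the hit and false-alarm rates yield Beta posteriors;
# sampling from them carries the uncertainty in the counts forward.
rng = np.random.default_rng(0)
hit_rate = beta(1 + hits, 1 + correct_trials - hits).rvs(10_000, random_state=rng)
fa_rate = beta(1 + false_alarms, 1 + incorrect_trials - false_alarms).rvs(10_000, random_state=rng)

# Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate):
# how well the participant distinguishes correct from incorrect AI advice.
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(f"posterior mean d' = {d_prime.mean():.2f}, "
      f"95% interval = ({np.quantile(d_prime, 0.025):.2f}, "
      f"{np.quantile(d_prime, 0.975):.2f})")
```

A d' near zero would indicate that a participant follows correct and incorrect advice indiscriminately, while larger values indicate better discrimination; tracking this quantity over blocks is one way such an analysis can show whether participants notice a deterioration in advice quality.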
This methodological approach will enable future studies to examine trust and distrust in interactions with AI in a more differentiated manner. Tobias Peters summarizes the implications for future research: “Our findings offer important insights for the current discourse on how to deal with error-prone AI systems and distrust toward them, especially with regard to warnings, or disclaimers, before using LLM-based chatbots such as ChatGPT.”
Publication:
Peters TM and Scharlau I (2025) Interacting with fallible AI: is distrust helpful when receiving AI misclassifications? Front. Psychol. 16:1574809. doi: 10.3389/fpsyg.2025.1574809