Disputations by TRR 318 Doctoral Candidates – Episode 2

At the end of the first funding phase of TRR 318, further doctoral candidates have successfully completed their dissertations. In this second episode of the disputation series, researchers from computer science and psychology present their research results.



Dr. André Groß, Project A05, Computer Science

In his dissertation, Dr. André Groß investigated how adaptive verbal scaffolding with negation-based explanations improves understanding and performance in human-robot tasks. To this end, he developed models and robot systems that adapt their explanations to the attention and performance of users.

Key finding:
Adaptive, negation-based explanations can improve understanding and task performance in human-robot interactions when tailored to the cognitive state of users.

Significance for TRR 318:
The dissertation develops a concept for adaptive explanations in interactive AI systems: negation and a computational model are used to capture users' attention and understanding, which in turn guide how the explanations are adapted.

Impact on (AI) practice:
The work shows that adaptive, negation-based explanations can make intelligent systems more understandable and user-friendly.



Dr. Tobias Peters, Project C01, Psychology

In his doctoral thesis, Dr. Tobias Peters examined healthy skepticism—a critical, wait-and-see attitude that can be justified—in the context of AI. In addition to providing a fundamental conceptual clarification of healthy skepticism and related concepts, he investigated how healthy skepticism can be measured and to what extent it can be promoted in human-AI interactions.

Key finding:
The typical conceptualization of trust in the context of AI has often fallen short. In addition, empirical results call into question whether a disclaimer presented prior to interaction has any effect at all.

Significance for TRR 318:
Healthy skepticism is closely related to one of the goals of TRR 318, namely enabling people to assess the quality of a system's results and to help shape the algorithm. In addition, trust and skepticism develop dynamically, which opens up an exciting perspective on human-AI interaction within TRR 318.

Impact on (AI) practice:
The work suggests paying attention not only to promoting trust, but also to promoting healthy skepticism. In addition, the results of a series of empirical studies question the usefulness of so-called disclaimers, such as those found in ChatGPT.



Dr. Amelie Robrecht, Project A01, Computer Science

When people explain something to each other, they have an image of the other person that is adjusted based on behavior and feedback—a so-called partner model. In her doctoral thesis, Dr. Amelie Robrecht investigated what happens when an artificial explainer also builds and uses such a model to adapt its explanations to users. She was able to show that this adaptation has a positive effect on users' understanding of explanations of the board game “Quarto!”.

Key finding:
Current developments in AI, especially in the field of generative AI and large language models, require interdisciplinary teams to be able to capture and illuminate their effects at different levels.

Significance for TRR 318:
The doctoral thesis supports the view that an optimal explanation is tailored to the individual.

Impact on (AI) practice:
AI systems should be able to develop explanations in dialogue with users. This has many implications for the development of new systems and also poses new challenges.


From left: Dr. André Groß, Dr. Amelie Robrecht, Dr. Tobias Peters

More on the topic

08.10.2025

First Disputations by TRR 318 Doctoral Candidates
