First Disputations by TRR 318 Doctoral Candidates

Topics range from economics and computer science to sociology and psycholinguistics. The first TRR 318 doctoral candidates have successfully completed their dissertations and provide insights into their research.

Dr. Jaroslaw Kornowicz, Project C02, Economics

In his thesis, “Human Integration in AI: Calibrating Trust and Improving Performance in Decision Support Systems,” Dr. Jaroslaw Kornowicz explored ways to improve collaboration between humans and artificial intelligence. He was particularly interested in how humanizing AI, integrating expert knowledge, and co-constructive approaches influence the trustworthiness and accuracy of AI-based decision support systems.

Key finding:
Co-constructive, human-centered AI systems increase performance and trust simultaneously, but require further empirical validation.

Significance for TRR 318:
Because co-construction improves explainability and decision quality, TRR 318 relies on close interdisciplinary collaboration and human-centered studies to investigate such systems holistically.

Impact on AI practice:
The targeted “humanization” of AI and the active involvement of users and experts increase acceptance in the field and demonstrably improve the accuracy of results.


Dr. Fabian Fumagalli, Project C03, Computer Science

In his dissertation, Dr. Fabian Fumagalli examined the mathematical foundations of feature-based explanation methods. He developed a framework that structures and describes the differences and interpretations of various explanation methods. Dr. Fumagalli also investigated how features influence each other and how explanations can be adapted when the model or data changes over time.

Key finding:
Explanation methods differ considerably in practice, so a sound mathematical foundation is essential.

Significance for TRR 318:
The mathematical-statistical classification of explanation methods is necessary for co-constructing explanations with users.

Impact on AI practice:
A deeper understanding of how explanation methods differ and how they can be interpreted helps practitioners understand models and their decisions more clearly.


Dr. Olesja Lammert, Project A03, Economics

In her dissertation, “Human Factors in XAI: Enhancing User Reliance Through Emotional Alignment in Decision-Making,” Dr. Olesja Lammert explored how considering emotions in AI-supported decision-making affects trust and user reliance. User reliance is defined as the extent to which users depend on the system's recommendations. Her work aimed to integrate emotional factors into explainable AI systems to promote rational decision-making and avoid emotional misinterpretations of AI explanations.

Key finding:
Explanations in AI-supported decision-making situations are influenced by emotions and cognitive processes. While some explanation strategies can strengthen user trust, too much transparency can overwhelm users and lead to rejection. Emotional and cognitive factors are therefore central to human-centered research on explainable AI.

Significance for TRR 318:
Research into the interplay of emotion and cognition in AI-supported decision-making situations helps to improve understanding of explainability. Interdisciplinary approaches, field studies in emotionally stressful contexts, and long-term studies on XAI, emotions, understanding, and trust can contribute significantly to this endeavor.

Impact on AI practice:
XAI systems can be particularly helpful when designed to be adaptive and interactive. They should adapt their explanations to the situation at hand and the emotions of the users to avoid cognitive overload. Thus, they support individuals in complex decision-making situations, promoting well-reasoned decisions.


Dr. Stefan Lazarov, Project A02, Psycholinguistics

In his dissertation, “The Reflection of Interactional Monitoring in the Dynamics of Verbal and Nonverbal Explaining Behavior,” Dr. Stefan Lazarov investigated how people adapt their explanatory behavior to their counterparts.

Key finding:
People structure their explanations based on brief multimodal feedback, such as eye and head movements. The way explainers use gestures depends heavily on how well listeners can follow and on whether the object being explained is physically present.

Significance for TRR 318:
The work emphasizes the importance of monitoring processes in explanations and sheds light on multimodal, adaptive explanatory behavior.

Impact on AI practice:
For AI systems to explain successfully, they must learn to perceive and correctly interpret multimodal feedback.


Dr. Nils Klowait, Project Ö, Sociology

In his dissertation, Dr. Nils Klowait examined the influence of interactive technologies, such as ChatGPT, on social interactions, as well as how people use their multimodal actions to shape, resist, or transform this influence.

Key finding:
“Context” is not a fixed property of the environment, but rather, it is something that participants actively use and make relevant in interaction. This becomes apparent when analyzing how people interact with non-human conversation partners, such as ChatGPT.

Significance for TRR 318:
The focus is on process: what matters is not only how a system is built, but also how its elements gain meaning and usability for those involved in real-life interactions.

Impact on AI practice:
Successful AI implementation requires careful, human-centered system design and detailed empirical studies on how AI is used in everyday life.


Dr. Josephine Beryl Fisher, Project A01, Psycholinguistics

In her dissertation, “Adaptive Explanations: The Involvement of Explainees,” Dr. Josephine Beryl Fisher explored how conversation partners verbally adapt to each other during explanations. She was particularly interested in how people actively participate in the explanation process when something is explained to them. Dr. Fisher analyzed verbal behavior with regard to explanation strategies and content elements.

Key finding:
People to whom something is explained actively shape explanations through substantive verbal contributions. The less they participate, the fewer topics the explainers address.

Significance for TRR 318:
The co-construction of explanations is driven by the content contributions of the recipient of the explanation. 

Impact on AI practice:
XAI systems should enable users to actively and flexibly participate in the explanation process.


From left: Dr. Nils Klowait, Dr. Josephine Beryl Fisher, Dr. Fabian Fumagalli, Dr. Olesja Lammert, Dr. Jaroslaw Kornowicz, Dr. Stefan Lazarov