The annual World Conference on Explainable Artificial Intelligence (XAI) brings together experts from a wide range of research fields to share and discuss knowledge, experiences and innovations in Explainable Artificial Intelligence. Like last year, many researchers from TRR 318 will attend the XAI Conference this summer to present their latest publications.
Researchers from projects A03, A04 and C05 will present their latest results and have the opportunity to exchange ideas with international colleagues.
Project A03 will present a paper investigating how different XAI strategies are perceived depending on the user's emotional arousal. The study "Human Emotions in AI Explanations" shows that people in a state of heightened arousal respond best to reasoned explanations. "We are looking forward to presenting our paper at the XAI conference and exchanging ideas with other scientists about the importance of considering dynamic factors such as emotions in XAI," says Olesja Lammert.
Michael Erol Schaffer from project A04 will present research on how explainers perceive and take into account the knowledge and interests of their addressees. The study "Perception and Consideration of the Explainees' Needs for Satisfying Explanations" found that explanations should address both observable characteristics and interpretable aspects. "Above all, I hope that attending the conference will provide me with good networking opportunities and suggestions on how the results of my research could actually be implemented in XAI," explains Schaffer.
"Interesting presentations and discussions as well as new collaborations and constructive feedback on my presented work," are the aspects that Felix Liedeker from the C05 project hopes to gain from his participation in the conference. Liedeker will be presenting the paper "An Empirical Investigation of Users' Assessment of XAI Explanations: Identifying the Sweet-Spot of Explanation Complexity". The paper analyses how different types of explanations (simple and complex counterfactual explanations) are perceived by users in important decision-making situations.