Project A03: Co-constructing explanations with emotional alignment between AI-explainer and human explainee

In Project A03, researchers are integrating emotion into the explanation process, addressing questions such as: How do emotional reactions influence the perception of an explanation given by an Artificial Intelligence? Which decisions should humans make with the help of Artificial Intelligence in risky situations? How can explanations be aligned with the emotional state of the human user? Researchers from computer science and economics are investigating how emotions such as happiness and anxiety affect comprehension and decision-making. Their goal is to develop a model for intelligent machines that recognizes emotions and takes them into account in explanatory processes.

Research areas: Computer science, Economics

Project leader

Prof. Dr. Kirsten Thommes

Staff

Olesja Lammert, M.Sc.

Christian Schütze, M.Sc.

Associate member

Dr.-Ing. Birte Richter, Bielefeld University

Support staff

Maryam Alizadeh, Bielefeld University

Vivien Mercedes Brückmann, Paderborn University

Robin Sjard von Collani, Bielefeld University

Eldan Huduti, Paderborn University

Ayla Luong, Bielefeld University

Tetyana Shevchuk, Paderborn University

Publications

What is Missing in XAI So Far?

U. Schmid, B. Wrede, KI - Künstliche Intelligenz 36 (2022) 303–315.


Explainable AI

U. Schmid, B. Wrede, KI - Künstliche Intelligenz 36 (2022) 207–210.


AI: Back to the Roots?

B. Wrede, KI - Künstliche Intelligenz 36 (2022) 117–120.



An Architecture Supporting Configurable Autonomous Multimodal Joint-Attention-Therapy for Various Robotic Systems

A. Groß, C. Schütze, B. Wrede, B. Richter, in: International Conference on Multimodal Interaction, ACM, 2022, pp. 154–159.


Enabling Non-Technical Domain Experts to Create Robot-Assisted Therapeutic Scenarios via Visual Programming

C. Schütze, A. Groß, B. Wrede, B. Richter, in: International Conference on Multimodal Interaction, ACM, 2022, pp. 166–170.


The Role of Response Time for Algorithm Aversion in Fast and Slow Thinking Tasks

A. Lebedeva, J. Kornowicz, O. Lammert, J. Papenkordt, in: Artificial Intelligence in HCI, 2023.


RISE: an open-source architecture for interdisciplinary and reproducible human–robot interaction research

A. Groß, C. Schütze, M. Brandt, B. Wrede, B. Richter, Frontiers in Robotics and AI 10 (2023).


EEG Correlates of Distractions and Hesitations in Human–Robot Interaction: A LabLinking Pilot Study

B. Richter, F. Putze, G. Ivucic, M. Brandt, C. Schütze, R. Reisenhofer, B. Wrede, T. Schultz, Multimodal Technologies and Interaction 7 (2023).


Emotional Debiasing Explanations for Decisions in HCI

C. Schütze, O. Lammert, B. Richter, K. Thommes, B. Wrede, in: Artificial Intelligence in HCI, 2023.


Humans in XAI: Increased Reliance in Decision-Making Under Uncertainty by Using Explanation Strategies

O. Lammert, B. Richter, C. Schütze, K. Thommes, B. Wrede, Frontiers in Behavioral Economics (2024).


Human Emotions in AI Explanations

K. Thommes, O. Lammert, C. Schütze, B. Richter, B. Wrede, in: Communications in Computer and Information Science, Springer Nature Switzerland, Cham, 2024.


Towards a Computational Architecture for Co-Constructive Explainable Systems

M. Booshehri, H. Buschmeier, P. Cimiano, S. Kopp, J. Kornowicz, O. Lammert, M. Matarese, D. Mindlin, A.S. Robrecht, A.-L. Vollmer, P. Wagner, B. Wrede, in: Proceedings of the 2024 Workshop on Explainability Engineering, ACM, 2024, pp. 20–25.


Static Socio-demographic and Individual Factors for Generating Explanations in XAI: Can they serve as a prior in DSS for adaptation of explanation strategies?

C. Schütze, B. Richter, O. Lammert, K. Thommes, B. Wrede, in: HAI ’24: Proceedings of the 12th International Conference on Human-Agent Interaction, ACM, 2024, pp. 141–149.

