Assessing Mental Health with AI - Challenges in Affective Computing

Affective computing deals with the computer-aided recognition of human emotions. In TRR 318, the team from project A06, "Co-Constructing social signs of understanding to adapt monitoring to diversity," is researching in this area. Dr. David Johnson, Jonas Paletschek, and Prof. Dr. Hanna Drimalla, in collaboration with Olya Hakobyan, have now published a paper in the journal "IEEE Transactions on Affective Computing" on the relevance of Explainable Artificial Intelligence (XAI) for affective computing.

TRR 318: Why is XAI gaining relevance for affective computing?

David Johnson: AI systems that can recognize emotions are now highly regulated under the EU AI Act and must meet strict requirements for transparency and interpretability. Furthermore, as affective computing deals with sensitive applications in areas like education and healthcare, understanding the models' predictions is crucial to ensure fair and accurate decision-making. For example, these methods could enable AI-assisted mental health assessment, but practitioners need to be sure that the diagnostic recommendations are accurate for all groups they are treating.

Why is there still a lack of research results here?

Although the research area is gaining interest, there is limited variety in the types of explanations being generated. Furthermore, evaluation of these methods is lacking, meaning that researchers and designers have no clear information on which types of explanations work best for affective computing tasks.

What are the next steps for advancing research in this area?

Different explanation types (beyond feature-based methods) need to be applied and evaluated in realistic affective computing contexts, such as AI-assisted mental health assessment. In this way, researchers can better identify which methods work best in real-world situations. Additionally, researchers should move toward explanations for multimodal methods, including further work in the audio domain.

The new publication, "Explainable AI for Audio and Visual Affective Computing: A Scoping Review," explores the use of XAI in audiovisual affective computing, a field that draws on facial expression and vocal pattern data to recognize emotional states. The authors highlight a lack of research on the interpretability of these models and provide recommendations to enhance transparency and explainability in future affective computing applications.

These findings align closely with the requirements of the EU's new AI Act, which emphasizes transparency and documentation in AI systems. In sensitive fields like affective computing, ensuring the explainability of model decisions is crucial.

Dr. David Johnson, lead author of the A06 publication "Explainable AI for Audio and Visual Affective Computing: A Scoping Review".
