Commentaries from TRR members on the European Union's new AI Act

The European Union has introduced a comprehensive AI law that provides for far-reaching regulation of artificial intelligence. The focus is on the quality of data for algorithm development and protection against copyright infringement. Developers must now clearly label when content has been created with AI and ensure transparency by documenting training content. Special requirements apply to critical areas such as security, with stricter controls and risk management. AI systems will also be categorised according to risk groups, with higher requirements for potentially dangerous applications. Technical documentation and digital watermarks will be mandatory, and surveillance by AI will be restricted. Biometric identification will only be allowed for specific suspects and serious crimes. This EU legislation on the regulated and responsible use of AI is the first of its kind in the world.

 

Commentary on the EU AI Act by Prof. Dr. Kirsten Thommes and Prof. Dr. Britta Wrede on the prohibition of AI emotion recognition in the workplace and educational institutions:

The EU’s “Artificial Intelligence Act” seeks to ban emotion recognition in the workplace and in educational institutions. This new proposal differs from the previous one, which also included a ban on emotion recognition in law enforcement and migration. By narrowing the scope in this way, the AI Act departs, without justification, from its own inner logic of protecting vulnerable groups (e.g. migrants): the exemption no longer appears targeted at protecting the most vulnerable groups in society.

The general ban on emotion recognition in workplaces and educational settings creates more problems than it solves. First, it prevents the use of general-purpose technologies. If, for instance, a general-purpose application such as ChatGPT (or a similar system) were to use semantic analysis of the user's language to assess emotions and phrase its answers accordingly, it could not be used in the workplace or at school. Such a rule can probably not be enforced; more likely, it will encourage workarounds that make the use of such technology even less controllable, and it shifts the risk of usage from the employer (or the school system) to the employee (or the student).
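To make the scenario concrete, the following illustrative Python sketch (hypothetical names and a crude lexicon, not any vendor's actual interface) shows the kind of lightweight behaviour a general-purpose assistant might exhibit, and which a blanket workplace ban would arguably cover:

    # Hypothetical sketch: an assistant infers a coarse emotional state from the
    # user's wording and adapts its phrasing. Cue lists and thresholds are invented
    # for illustration; real systems would be model-based.

    FRUSTRATION_CUES = {"confusing", "stuck", "annoying", "again", "why won't"}
    CALM_CUES = {"thanks", "great", "makes sense", "got it"}

    def assess_emotion(user_message: str) -> str:
        """Very rough lexicon-based assessment of the user's emotional state."""
        text = user_message.lower()
        if any(cue in text for cue in FRUSTRATION_CUES):
            return "frustrated"
        if any(cue in text for cue in CALM_CUES):
            return "calm"
        return "neutral"

    def phrase_answer(core_answer: str, emotion: str) -> str:
        """Adapt tone and pacing to the assessed emotional state."""
        if emotion == "frustrated":
            return f"Let's take this one step at a time. {core_answer}"
        if emotion == "calm":
            return f"{core_answer} Happy to go deeper if you like."
        return core_answer

    if __name__ == "__main__":
        msg = "This error is so annoying, I'm stuck again."
        print(phrase_answer("The build fails because the config path is wrong.",
                            assess_emotion(msg)))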

Second, emotion recognition may be helpful and human-centered in some instances. In our research (project A3, TRR 318), we find that the explanation strategies of an AI system can use human emotion in two ways. First, emotions can serve as an early feedback signal for non-understanding or unsatisfactory explanations, allowing the system to initiate an interactive clarification process in which the user guides the explanation towards the issues they are struggling with. Second, our latest research shows that users in a state of high arousal require different explanation strategies than users in other emotional states: aroused users are easily overloaded by complex explanations, whereas more targeted, cascading explanations are better received. If the goal at work or in educational settings is to understand the topic or the AI system, then emotion recognition may help to achieve that goal. Banning emotion recognition in work or educational settings may therefore do more harm than good: users have emotions, and human-centricity must take these human characteristics and their heterogeneity into account.
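The following simplified Python sketch illustrates these two uses with hypothetical names and thresholds; it is only a schematic of the idea, not the implementation from project A3:

    # Illustrative sketch: an estimated arousal score selects the explanation
    # strategy, and a non-understanding signal triggers an interactive
    # clarification step. All values and interfaces are assumptions.

    from dataclasses import dataclass

    @dataclass
    class UserState:
        arousal: float            # assumed in [0, 1], e.g. from speech or text cues
        signals_confusion: bool   # e.g. clarification questions, long pauses

    def explain(topic_steps: list[str], state: UserState) -> list[str]:
        """Return the explanation turns to present, adapted to the user's state."""
        if state.arousal > 0.7:
            # Highly aroused users are easily overloaded: cascade the explanation,
            # presenting one targeted step at a time instead of the full chain.
            turns = [f"Step {i + 1}: {step}" for i, step in enumerate(topic_steps)]
        else:
            turns = [" ".join(topic_steps)]

        if state.signals_confusion:
            # Emotion as early feedback: open a clarification dialogue so the user
            # can steer the explanation towards what they are struggling with.
            turns.append("Which part should I go over again?")
        return turns

    print(explain(["The model weighs feature X most.",
                   "That weight flips the decision."],
                  UserState(arousal=0.8, signals_confusion=True)))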

Further, the EU’s “Artificial Intelligence Act” points to the limited reliability of emotion recognition as grounds for prohibition. While we agree that emotion recognition is currently not 100% reliable, the argument is not coherent: hardly any data is 100% reliable, because measurement error is ubiquitous and the ground truth (of emotions, but also of other latent constructs) remains unknown. Banning emotion recognition assumes that measurement errors can be reduced for other kinds of data, but not for emotion recognition. One way forward is, for instance, an interactive AI that can be corrected during the interaction. With a co-constructive AI interface, emotion recognition results become subject to negotiation and can be altered by the user, allowing the user to contribute actively to the explanation process.
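A minimal sketch of such a co-constructive correction loop, again with hypothetical interfaces and a placeholder recogniser, could look as follows: the system's emotion estimate is treated as a proposal that the user can confirm or overrule, and only the corrected label is used further.

    def recognize_emotion(utterance: str) -> str:
        """Placeholder recogniser; a real system would use an audio/text model."""
        return "frustrated" if "!" in utterance else "neutral"

    def negotiate_emotion(utterance: str, ask_user) -> str:
        """Offer the estimate to the user and let them overrule it."""
        estimate = recognize_emotion(utterance)
        answer = ask_user(f"You seem {estimate} - is that right? "
                          f"(press Enter to accept, or type a correction): ").strip()
        return answer or estimate

    if __name__ == "__main__":
        final = negotiate_emotion("This explanation does not help at all!",
                                  ask_user=input)
        print(f"Using emotion label: {final}")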

While we agree that emotion recognition may be misused in some instances, we argue that a general ban in the workplace or in education may create more problems than it mitigates. What is needed instead are targeted measures to protect vulnerable groups and regulation of high-risk applications, defined as those posing "significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law".

 

Commentary on the AI Act of the European Union by Prof. Dr. Henning Wachsmuth:

Artificial Intelligence (AI) is becoming increasingly involved in processes and decisions that affect human life. AI has the potential to enhance our lives in many ways, but it also poses new risks to our safety and freedom. Rules for the responsible development and use of AI are therefore an important and long-awaited effort by international legislative bodies such as the European Parliament. Several aspects of the Artificial Intelligence Act reflect key values of European society, including the ban on using AI for social scoring and for the manipulation of human behavior. Other aspects aim to safeguard our fundamental rights to fair and equal treatment, such as the right to an explanation of AI decisions.

It is of utmost importance that we, as a society, do not lose control over the norms and values we believe in as AI becomes more powerful. At the same time, the rules put in place should not hinder innovative AI research and development across institutions in Europe, so as not to increase our dependence on big tech companies. This will help us avoid becoming more vulnerable to global players who show no interest in complying with the rules. I appreciate in this regard that the European Parliament limits certain restrictions to high-risk systems and explicitly aims to support small and mid-sized companies. However, it remains to be seen what the actual operationalization of the defined rules will look like and to what extent it can address the challenges that AI poses to our lives. The rules are thus a first valuable step towards the responsible regulation of AI, but they will need further refinement and reassessment in the future.

Prof. Dr. Kirsten Thommes (Paderborn University), project leader of subprojects A03 and C02
Prof. Dr. Britta Wrede (Bielefeld University), project leader of subprojects A03, A05 and Ö
Prof. Dr. Henning Wachsmuth (Hannover University), project leader of subprojects C04 and INF.