Project B06: Ethics and Normativity of Explainable Artificial Intelligence

Explanations of decisions made by artificial intelligence need to be considered ethically; otherwise, they could be used to manipulate users or to create acceptance for a technology that is ethically or legally unacceptable, says subproject leader Jun.-Prof. Dr. Suzana Alpsancar. Over two years, the project team will systematically classify the different purposes, needs, and requirements of explanations and reflect ethically on technological development within TRR 318. They will distinguish the ethical requirements from those of the users, since the two are not necessarily aligned. The ethical framework developed for explaining artificial intelligence will then be applied to concrete projects within TRR 318. On the one hand, the researchers aim to show how current design decisions reconcile ethical considerations with user requirements; on the other, they plan to formulate concrete design recommendations for further development based on these findings. The overall aim of the project is to create greater sensitivity to social contexts in artificial intelligence research. To this end, the project members will identify aspects that cannot be solved technically but need to be addressed at a social or legal level.


Research areas: Philosophy

Project leaders

Jun.-Prof. Dr. Suzana Alpsancar

Transregional Collaborative Research Centre 318


Prof. Dr. Tobias Matzner

Transregional Collaborative Research Centre 318


Staff

Dr. Martina Philippi

Transregional Collaborative Research Centre 318


Dr. Wessel Reijers

Transregional Collaborative Research Centre 318


Publications

What is AI Ethics? Ethics as means of self-regulation and the need for critical reflection
S. Alpsancar, in: International Conference on Computer Ethics 2023, 2023, pp. 1–17.