Ethics and Normativity of Explainable Artificial Intelligence (Project B06)
Explanations of decisions made by artificial intelligence must themselves be examined from an ethical perspective. Otherwise, they could be used to manipulate users or to create acceptance for a technology that is ethically or legally unacceptable, says subproject leader Jun.-Prof. Dr. Suzana Alpsancar. Over the course of two years, the project team will systematically classify the different purposes, needs, and requirements of explanations and reflect ethically on technological development within TRR 318. In doing so, they will distinguish ethical requirements from user requirements, as the two do not necessarily coincide. The resulting ethical framework for explaining artificial intelligence will then be applied to concrete projects within TRR 318. On the one hand, the researchers aim to show how current design decisions reconcile ethical considerations with user requirements; on the other, they plan to derive concrete design recommendations for further development from these findings. The overall aim of the project is to foster greater sensitivity to social contexts in artificial intelligence research. To this end, the project members will identify issues that cannot be solved technically but must instead be addressed at a social or legal level.
Project leaders
Jun.-Prof. Dr. Suzana Alpsancar, Paderborn University
Prof. Dr. Tobias Matzner, Paderborn University