Project B06: Ethics and Normativity of Explainable Artificial Intelligence
Explanations of decisions made by artificial intelligence need to be considered from an ethical perspective. Otherwise, they could be used to manipulate users or to create acceptance for a technology that is ethically or legally unacceptable, says subproject leader Jun.-Prof. Dr. Suzana Alpsancar. Over the course of two years, the project team will systematically classify the different purposes, needs, and requirements of explanations and reflect ethically on the technological development within TRR 318. They will systematically distinguish ethical requirements from user requirements, as the two do not necessarily align. The ethical framework developed for explainable artificial intelligence will then be applied to concrete projects within TRR 318. On the one hand, the researchers aim to show how current design decisions reconcile ethical considerations with user requirements; on the other, they plan to formulate concrete design recommendations for further development based on these findings. The overall aim of the project is to create greater sensitivity to social contexts in artificial intelligence research. To this end, the project members will identify aspects that cannot be solved technically but must be addressed at a social or legal level.
Research areas: Philosophy
Publications
S. Alpsancar, in: International Conference on Computer Ethics 2023, 2023, pp. 1–17.
S. Alpsancar, in: R. Adolphi, S. Alpsancar, S. Hahn, M. Kettner (Eds.), Philosophische Digitalisierungsforschung: Verantwortung, Verständigung, Vernunft, Macht, transcript, Bielefeld, 2024, pp. 55–113.
S. Alpsancar, T. Matzner, M. Philippi, in: Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21st International Conference on the Ethical and Social Impacts of ICT, Universidad de La Rioja, 2024, pp. 31–35.