Project B06: Ethics and normativity of explainable AI
Explanations of decisions made by Artificial Intelligence are not always beneficial or desirable. For this reason, the co-construction of explanations also has to be considered from an ethical point of view; otherwise, explanations could be used to manipulate users or lead to unintended harm, says project leader Prof. Dr. Suzana Alpsancar.
During the first funding period, the project team from the fields of philosophy and media studies identified a variety of normative reasons for explainable artificial intelligence (XAI). In the second funding phase, the project is investigating the organizational contexts in which XAI is used and how explainable AI can foster epistemic and political virtues in these contexts.
The overall aim of the project is to create greater sensitivity to social contexts in research on Artificial Intelligence. To this end, the project members will identify issues that cannot be solved technically but need to be addressed at a social or legal level.
Research areas: Philosophy, Media studies
Support Staff
Frederik Schauerte, Paderborn University
Former Members
Dr. Martina Philippi, Research associate
Publications
Die Ambivalenz von Sichtbarkeit. Ethische Perspektiven auf die digitale Transformation
M. Philippi, 2025.
Civic Vice in Digital Governance
W. Reijers, in: Public Governance and Emerging Technologies, Springer Nature Switzerland, Cham, 2025.
Introduction to the Ethics of Emerging Technologies
W. Reijers, M. Thomas Young, M. Coeckelbergh, Introduction to the Ethics of Emerging Technologies, 2025.
How to Govern the Confidence Machine?
P. de Filippi, M. Mannan, W. Reijers, Regulation & Governance (2025).
Explanation needs and ethical demands: unpacking the instrumental value of XAI
S. Alpsancar, H.M. Buhl, T. Matzner, I. Scharlau, AI and Ethics 5 (2025) 3015–3033.
Explainability and AI Governance
W. Reijers, T. Matzner, S. Alpsancar, in: M. Farina, X. Yu, J. Chen (Eds.), Digital Development. Technology, Ethics and Governance, Routledge, New York, 2025.
Warum und wozu erklärbare KI? Über die Verschiedenheit dreier paradigmatischer Zwecksetzungen
S. Alpsancar, in: R. Adolphi, S. Alpsancar, S. Hahn, M. Kettner (Eds.), Philosophische Digitalisierungsforschung. Verantwortung, Verständigung, Vernunft, Macht, transcript, Bielefeld, 2024, pp. 55–113.
AI explainability, temporality, and civic virtue
W. Reijers, T. Matzner, S. Alpsancar, M. Philippi, in: Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024, 21st International Conference on the Ethical and Social Impacts of ICT, Universidad de La Rioja, Logroño, 2024.
Unpacking the purposes of explainable AI
S. Alpsancar, T. Matzner, M. Philippi, in: Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024, 21st International Conference on the Ethical and Social Impacts of ICT, Universidad de La Rioja, 2024, pp. 31–35.
Von der Kunst, die richtigen Fragen zu stellen. Das Potential der Phänomenologie für die Technikfolgenabschätzung
M. Philippi, in: TA24, 2024.
Dealing responsibly with tacit assumptions. An interdisciplinary approach to the integration of ethical reflexion into user modeling
M. Philippi, D. Mindlin, in: EASST-4S, 2024.
How to address ethical problems in a multi-perspective context: Interdisciplinary challenges of XAI
M. Philippi, in: Fpet2024, 2024.
Ethics of Explainable AI
M. Philippi, W. Reijers, 2024.
Interdisciplinary challenges for XAI ethics and the potential of the phenomenological approach
M. Philippi, guest lecture at the Eindhoven Center for the Philosophy of AI (ECPAI), TU Eindhoven, 25 June 2024.
(X)AI ethics for decision support systems
M. Philippi, guest lecture at the DLR Institut für den Schutz maritimer Infrastrukturen, Bremerhaven, 14 November 2024.
Herausforderungen und Potentiale von erklärbarer KI für Technikfolgenabschätzung und Politikberatung
M. Philippi, in: NTA11, 2024.
Transparency and persuasion: Chances and Risks of Explainable AI applications in modeling for policy
M. Philippi, in: SAS24, 2024.
What is AI Ethics? Ethics as means of self-regulation and the need for critical reflection
S. Alpsancar, in: International Conference on Computer Ethics 2023, 2023, pp. 1–17.
Trust and awareness in the context of search and rescue missions
M. Philippi, in: SAS23, 2023.