Project INF: Toward a framework for assessing explanation quality

Complementing the subprojects in research areas A, B, and C, the members of Project INF investigate how the quality and success of an explanation can be measured. To this end, the computer scientists will first analyze natural language explanations, such as written records from other projects or explanatory dialogues from internet forums. Based on the findings of this analysis, they will then develop tools and criteria that can be applied to explanations. In addition, the project members will provide TRR researchers with a glossary of key concepts. The aim of the project is to establish an understanding of what makes an explanation successful. With these findings, the researchers will contribute to the comprehensive theory of the co-construction of explanations elaborated in the TRR.
Research areas: Computer science
Support staff
Akshit Bhatia, Paderborn University
Maryam Bahraminejad, Bielefeld University
Alexander Espig, Leibniz University Hannover
Simona Ignatova, Bielefeld University
Felix Lange, Paderborn University
Maryam Nobakht, Bielefeld University
Yaxi Wang, Leibniz University Hannover
Publications
A BFO-based ontological analysis of entities in Social XAI
M. Booshehri, H. Buschmeier, P. Cimiano, in: Proceedings of the 15th International Conference on Formal Ontology in Information Systems, IOS Press, 2025, pp. 255–268.
A BFO-based ontology of context for Social XAI
M. Booshehri, H. Buschmeier, P. Cimiano, in: Abstracts of the 3rd TRR 318 Conference: Contextualizing Explanations, 2025.
Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues
L. Fichtel, M. Spliethöver, E. Hüllermeier, P. Jimenez, N. Klowait, S. Kopp, A.-C. Ngonga Ngomo, A. Robrecht, I. Scharlau, L. Terfloth, A.-L. Vollmer, H. Wachsmuth, in: Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Association for Computational Linguistics, Avignon, France, n.d.
Adaptive Prompting: Ad-hoc Prompt Composition for Social Bias Detection
M. Spliethöver, T. Knebler, F. Fumagalli, M. Muschalik, B. Hammer, E. Hüllermeier, H. Wachsmuth, in: L. Chiruzzo, A. Ritter, L. Wang (Eds.), Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Association for Computational Linguistics, Albuquerque, New Mexico, 2025, pp. 2421–2449.
Modeling the Quality of Dialogical Explanations
M. Alshomary, F. Lange, M. Booshehri, M. Sengupta, P. Cimiano, H. Wachsmuth, in: N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), ELRA and ICCL, Torino, Italia, 2024, pp. 11523–11536.
Towards a Computational Architecture for Co-Constructive Explainable Systems
H. Buschmeier, P. Cimiano, S. Kopp, J. Kornowicz, O. Lammert, M. Matarese, D. Mindlin, A.S. Robrecht, A.-L. Vollmer, P. Wagner, B. Wrede, M. Booshehri, in: Proceedings of the 2024 Workshop on Explainability Engineering, ACM, 2024, pp. 20–25.
Towards a BFO-based ontology of understanding in explanatory interactions
M. Booshehri, H. Buschmeier, P. Cimiano, in: Proceedings of the 4th International Workshop on Data Meets Applied Ontologies in Explainable AI (DAO-XAI), International Association for Ontology and its Applications, Santiago de Compostela, Spain, 2024.
A model of factors contributing to the success of dialogical explanations
M. Booshehri, H. Buschmeier, P. Cimiano, in: Proceedings of the 26th ACM International Conference on Multimodal Interaction, ACM, San José, Costa Rica, 2024, pp. 373–381.
Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness
M. Spliethöver, S.N. Menon, H. Wachsmuth, in: L.-W. Ku, A. Martins, V. Srikumar (Eds.), Findings of the Association for Computational Linguistics: ACL 2024, Association for Computational Linguistics, Bangkok, Thailand, 2024, pp. 9294–9313.
Conclusion-based Counter-Argument Generation
M. Alshomary, H. Wachsmuth, in: A. Vlachos, I. Augenstein (Eds.), Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, Association for Computational Linguistics, Dubrovnik, Croatia, 2023, pp. 957–967.
“Mama Always Had a Way of Explaining Things So I Could Understand”: A Dialogue Corpus for Learning to Construct Explanations
H. Wachsmuth, M. Alshomary, in: N. Calzolari, C.-R. Huang, H. Kim, J. Pustejovsky, L. Wanner, K.-S. Choi, P.-M. Ryu, H.-H. Chen, L. Donatelli, H. Ji, S. Kurohashi, P. Paggio, N. Xue, S. Kim, Y. Hahm, Z. He, T.K. Lee, E. Santus, F. Bond, S.-H. Na (Eds.), Proceedings of the 29th International Conference on Computational Linguistics, International Committee on Computational Linguistics, Gyeongju, Republic of Korea, 2022, pp. 344–354.
Generating Contrastive Snippets for Argument Search
M. Alshomary, J. Rieskamp, H. Wachsmuth, in: Proceedings of the 9th International Conference on Computational Models of Argument, 2022, pp. 21–31.
The Moral Debater: A Study on the Computational Generation of Morally Framed Arguments
M. Alshomary, R. El Baff, T. Gurcke, H. Wachsmuth, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022, pp. 8782–8797.