Project INF: Toward a framework for assessing explanation quality

Complementing the subprojects across research areas A, B, and C, the members of Project INF investigate how the quality and success of an explanation can be measured. To this end, the computer scientists will first analyze natural language explanations, such as written records from other projects or explanatory dialogues from internet forums. Based on the findings from this analysis, they will then develop tools and criteria that can be applied to explanations. In addition, the project members will provide TRR researchers with a glossary of key concepts. The aim of this project is to establish an understanding of what makes an explanation successful. With these findings, the researchers will contribute to a comprehensive theory of the co-construction of explanations as elaborated in the TRR.

Research area: Computer science

Project leaders

Prof. Dr. Philipp Cimiano

Prof. Dr. Henning Wachsmuth

Staff

Meisam Booshehri

Maximilian Spliethöver

Support staff

Akshit Bhatia, Paderborn University

Maryam Bahraminejad, Bielefeld University

Alexander Espig, Leibniz University Hannover

Simona Ignatova, Bielefeld University

Felix Lange, Paderborn University

Maryam Nobakht, Bielefeld University

Yaxi Wang, Leibniz University Hannover

Publications

“Mama Always Had a Way of Explaining Things So I Could Understand”: A Dialogue Corpus for Learning to Construct Explanations

H. Wachsmuth, M. Alshomary, in: N. Calzolari, C.-R. Huang, H. Kim, J. Pustejovsky, L. Wanner, K.-S. Choi, P.-M. Ryu, H.-H. Chen, L. Donatelli, H. Ji, S. Kurohashi, P. Paggio, N. Xue, S. Kim, Y. Hahm, Z. He, T.K. Lee, E. Santus, F. Bond, S.-H. Na (Eds.), Proceedings of the 29th International Conference on Computational Linguistics, International Committee on Computational Linguistics, Gyeongju, Republic of Korea, 2022, pp. 344–354.


Generating Contrastive Snippets for Argument Search

M. Alshomary, J. Rieskamp, H. Wachsmuth, in: Proceedings of the 9th International Conference on Computational Models of Argument, 2022, pp. 21–31.


The Moral Debater: A Study on the Computational Generation of Morally Framed Arguments

M. Alshomary, R. El Baff, T. Gurcke, H. Wachsmuth, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022, pp. 8782–8797.


Conclusion-based Counter-Argument Generation

M. Alshomary, H. Wachsmuth, in: A. Vlachos, I. Augenstein (Eds.), Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, Association for Computational Linguistics, Dubrovnik, Croatia, 2023, pp. 957–967.


A model of factors contributing to the success of dialogical explanations

M. Booshehri, H. Buschmeier, P. Cimiano, in: Proceedings of the 26th ACM International Conference on Multimodal Interaction, ACM, San José, Costa Rica, 2024, pp. 373–381.


Towards a BFO-based ontology of understanding in explanatory interactions

M. Booshehri, H. Buschmeier, P. Cimiano, in: Proceedings of the 4th International Workshop on Data Meets Applied Ontologies in Explainable AI (DAO-XAI), International Association for Ontology and its Applications, Santiago de Compostela, Spain, 2024.


Modeling the Quality of Dialogical Explanations

M. Alshomary, F. Lange, M. Booshehri, M. Sengupta, P. Cimiano, H. Wachsmuth, in: N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), ELRA and ICCL, Torino, Italy, 2024, pp. 11523–11536.


Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness

M. Spliethöver, S.N. Menon, H. Wachsmuth, in: L.-W. Ku, A. Martins, V. Srikumar (Eds.), Findings of the Association for Computational Linguistics: ACL 2024, Association for Computational Linguistics, Bangkok, Thailand, 2024, pp. 9294–9313.


Towards a Computational Architecture for Co-Constructive Explainable Systems

H. Buschmeier, P. Cimiano, S. Kopp, J. Kornowicz, O. Lammert, M. Matarese, D. Mindlin, A.S. Robrecht, A.-L. Vollmer, P. Wagner, B. Wrede, M. Booshehri, in: Proceedings of the 2024 Workshop on Explainability Engineering, ACM, 2024, pp. 20–25.
