Publications

All Publications

2026

Structures Underlying Explanations

Jimenez, P., Vollmer, A. L., & Wachsmuth, H. (n.d.). Structures Underlying Explanations. In K. Rohlfing, K. Främling, B. Lim, S. Alpsancar, & K. Thommes (Eds.), Social Explainable AI: Communications of NII Shonan Meetings. Springer Singapore.


Characteristics of nonverbal behavior

Lazarov, S. T., Tchappi, I., & Grimminger, A. (n.d.). Characteristics of nonverbal behavior. In K. J. Rohlfing, K. Främling, S. Alpsancar, K. Thommes, & B. Y. Lim (Eds.), Social Explainable AI. Springer.


Theoretical aspects of multimodal processing

Grimminger, A., & Buschmeier, H. (n.d.). Theoretical aspects of multimodal processing. In K. J. Rohlfing, K. Främling, S. Alpsancar, K. Thommes, & B. Y. Lim (Eds.), Social Explainable AI. Springer.


Timing and synchronization of multimodal signals in explanations

Wagner, P., & Kopp, S. (n.d.). Timing and synchronization of multimodal signals in explanations. In K. J. Rohlfing, K. Främling, S. Alpsancar, K. Thommes, & B. Y. Lim (Eds.), Social Explainable AI. Springer.


Components of an explanation for co-constructive sXAI

Vollmer, A.-L., Buhl, H. M., Alami, R., Främling, K., Grimminger, A., Booshehri, M., & Ngonga Ngomo, A.-C. (n.d.). Components of an explanation for co-constructive sXAI. In K. J. Rohlfing, K. Främling, S. Alpsancar, K. Thommes, & B. Y. Lim (Eds.), Social Explainable AI. Springer.


Incremental communication

Wrede, B., Buschmeier, H., Rohlfing, K. J., Booshehri, M., & Grimminger, A. (n.d.). Incremental communication. In K. J. Rohlfing, K. Främling, S. Alpsancar, K. Thommes, & B. Y. Lim (Eds.), Social Explainable AI. Springer.


Practices: How to establish an explaining practice

Rohlfing, K. J., Vollmer, A.-L., & Grimminger, A. (n.d.). Practices: How to establish an explaining practice. In K. Rohlfing, K. Främling, K. Thommes, S. Alpsancar, & B. Y. Lim (Eds.), Social Explainable AI. Springer.


Social Context in Human-AI Interaction (HAI): A Theoretical Framework Based on Multi-Perspectival Imaginaries

Menne, A. L., & Schulz, C. (n.d.). Social Context in Human-AI Interaction (HAI): A Theoretical Framework Based on Multi-Perspectival Imaginaries. In C. Stephanidis, M. Antona, S. Ntoa, & G. Salvendy (Eds.), HCI International 2026 Posters: 28th International Conference on Human-Computer Interaction, HCI 2026, Montreal, Canada, July 26-31, 2026, Proceedings. Springer International Publishing.


2025

Agency in metaphors of explaining: An analysis of scientific texts

Scharlau, I., & Rohlfing, K. J. (2025). Agency in metaphors of explaining: An analysis of scientific texts. Center for Open Science.


Trust, distrust, and appropriate reliance in (X)AI: A conceptual clarification of user trust and survey of its empirical evaluation

Visser, R., Peters, T. M., Scharlau, I., & Hammer, B. (2025). Trust, distrust, and appropriate reliance in (X)AI: A conceptual clarification of user trust and survey of its empirical evaluation. Cognitive Systems Research, Article 101357. https://doi.org/10.1016/j.cogsys.2025.101357


An annotated corpus of elicited metaphors of explaining and understanding using MIPVU

Porwol, P., & Scharlau, I. (2025). An annotated corpus of elicited metaphors of explaining and understanding using MIPVU. OSF. https://doi.org/10.17605/OSF.IO/Y6SMX


Feeds. Ein zentrales Strukturprinzip sozialer Medien

Schulz, C. (n.d.). Feeds. Ein zentrales Strukturprinzip sozialer Medien. In R. Dörre & A. Tuschling (Eds.), Handbuch Social Media: Geschichte – Kultur – Ästhetik (1st ed.). Metzler Verlag.


Speech Synthesis along Perceptual Voice Quality Dimensions

Rautenberg, F., Kuhlmann, M., Seebauer, F., Wiechmann, J., Wagner, P., & Haeb-Umbach, R. (2025). Speech Synthesis along Perceptual Voice Quality Dimensions. ICASSP 2025 – 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India. https://doi.org/10.1109/icassp49660.2025.10888012


Interacting with fallible AI: Is distrust helpful when receiving AI misclassifications?

Peters, T. M., & Scharlau, I. (2025). Interacting with fallible AI: Is distrust helpful when receiving AI misclassifications? Frontiers in Psychology, 16. https://doi.org/10.3389/fpsyg.2025.1574809


Algorithm, expert, or both? Evaluating the role of feature selection methods on user preferences and reliance

Kornowicz, J., & Thommes, K. (2025). Algorithm, expert, or both? Evaluating the role of feature selection methods on user preferences and reliance. Plos One. https://doi.org/10.1371/journal.pone.0318874


Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues

Fichtel, L., Spliethöver, M., Hüllermeier, E., Jimenez, P., Klowait, N., Kopp, S., Ngonga Ngomo, A.-C., Robrecht, A., Scharlau, I., Terfloth, L., Vollmer, A.-L., & Wachsmuth, H. (2025). Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues. In arXiv:2504.18483.


Die Ambivalenz von Sichtbarkeit. Ethische Perspektiven auf die digitale Transformation

Philippi, M. (2025). Die Ambivalenz von Sichtbarkeit. Ethische Perspektiven auf die digitale Transformation. Sorbische Lebenswelten im digitalen Zeitalter, BTU Cottbus-Senftenberg, Cottbus.


Grenzen des Verstehens

Philippi, M. (2025). Grenzen des Verstehens. Hermeneutik - oder: Was heißt “Verstehen”? Januartagung der Evangelischen Forschungsakademie, Berlin.


On "Super Likes" and Algorithmic (In)Visibilities: Frictions Between Social and Economic Logics in the Context of Social Media Platforms

Schulz, C. (2025). On “Super Likes” and Algorithmic (In)Visibilities: Frictions Between Social and Economic Logics in the Context of Social Media Platforms. Digital Culture & Society, 2/2023, 45–68. https://doi.org/10.14361/dcs-2023-0204


Contrastive Verbal Guidance: A Beneficial Context for Attention To Events and Their Memory?

Singh, A., & Rohlfing, K. J. (2025). Contrastive Verbal Guidance: A Beneficial Context for Attention To Events and Their Memory? Cognitive Science, 49(8), Article e70096. https://doi.org/10.1111/cogs.70096


Synthesizing Speech with Selected Perceptual Voice Qualities – A Case Study with Creaky Voice

Rautenberg, F., Seebauer, F., Wiechmann, J., Kuhlmann, M., Wagner, P., & Haeb-Umbach, R. (2025). Synthesizing Speech with Selected Perceptual Voice Qualities – A Case Study with Creaky Voice. Interspeech 2025. Interspeech, Rotterdam. https://doi.org/10.21437/Interspeech.2025-1443


Would I regret being different? The influence of social norms on attitudes toward AI usage

Kornowicz, J., Pape, M., & Thommes, K. (2025). Would I regret being different? The influence of social norms on attitudes toward AI usage. arXiv. https://doi.org/10.48550/ARXIV.2509.04241


Challenges and Limits in Explaining and Acoustic Modeling of Voice Characteristics

Wiechmann, J., & Wagner, P. (2025). Challenges and Limits in Explaining and Acoustic Modeling of Voice Characteristics. Journal of Voice. https://doi.org/10.1016/j.jvoice.2025.07.036


Intra-individual variability in TVA attentional capacity and weight distribution: A reanalysis across days and an experiment within-day

Banh, N. C., & Scharlau, I. (2025). Intra-individual variability in TVA attentional capacity and weight distribution: A reanalysis across days and an experiment within-day. Center for Open Science.


Understanding personal agency through metaphor, or Why academic writing is (not) like a roller-coaster ride

Karsten, A. (2025). Understanding personal agency through metaphor, or Why academic writing is (not) like a roller-coaster ride. Frontiers in Language Sciences, 4, Article 1567498. https://doi.org/10.3389/flang.2025.1567498


Assessing healthy distrust in human-AI interaction: Interpreting changes in visual attention.

Peters, T. M., Biermeier, K., & Scharlau, I. (2025). Assessing healthy distrust in human-AI interaction: Interpreting changes in visual attention. Center for Open Science.


Applications of video-recall for the assessment of understanding and knowledge in explanatory contexts

Lazarov, S. T., Schaffer, M., Gladow, V., Buschmeier, H., Grimminger, A., & Buhl, H. M. (n.d.). Applications of video-recall for the assessment of understanding and knowledge in explanatory contexts.


Acoustic detection of false positive backchannels of understanding in explanations

Türk, O., Lazarov, S. T., Buschmeier, H., Wagner, P., & Grimminger, A. (2025). Acoustic detection of false positive backchannels of understanding in explanations. LingCologne 2025 – Book of Abstracts, 36.


Investigating the Impact of Conceptual Metaphors on LLM-based NLI through Shapley Interactions

Sengupta, M., Muschalik, M., Fumagalli, F., Hammer, B., Hüllermeier, E., Ghosh, D., & Wachsmuth, H. (2025). Investigating the Impact of Conceptual Metaphors on LLM-based NLI through Shapley Interactions. Accepted in Findings. Empirical Methods in Natural Language Processing (EMNLP 2025).


Dung’s Argumentation Framework: Unveiling the Expressive Power with Inconsistent Databases

Mahmood, Y., Hecher, M., & Ngonga Ngomo, A.-C. (2025). Dung’s Argumentation Framework: Unveiling the Expressive Power with Inconsistent Databases. Proceedings of the AAAI Conference on Artificial Intelligence, 39(14), 15058–15066. https://doi.org/10.1609/aaai.v39i14.33651


Logics with probabilistic team semantics and the Boolean negation

Hannula, M., Hirvonen, M., Kontinen, J., Mahmood, Y., Meier, A., & Virtema, J. (2025). Logics with probabilistic team semantics and the Boolean negation. Journal of Logic and Computation, 35(3). https://doi.org/10.1093/logcom/exaf021


Facets in Argumentation: A Formal Approach to Argument Significance

Fichte, J., Fröhlich, N., Hecher, M., Lagerkvist, V., Mahmood, Y., Meier, A., & Persson, J. (2025). Facets in Argumentation: A Formal Approach to Argument Significance. In arXiv:2505.10982.


A BFO-based ontological analysis of entities in Social XAI

Booshehri, M., Buschmeier, H., & Cimiano, P. (2025). A BFO-based ontological analysis of entities in Social XAI. In Proceedings of the 15th International Conference on Formal Ontology in Information Systems (pp. 255–268). IOS Press. https://doi.org/10.3233/faia250498


Assessing AI Literacy: A Systematic Review of Questionnaires with Emphasis on Affective, Behavioral, Cognitive, and Ethical Aspects

Lenke, M., Klowait, N., Biere, L., & Schulte, C. (2025). Assessing AI Literacy: A Systematic Review of Questionnaires with Emphasis on Affective, Behavioral, Cognitive, and Ethical Aspects. In Lecture Notes in Computer Science. Springer Nature Switzerland. https://doi.org/10.1007/978-3-032-01222-7_8


The presentation of self in the age of ChatGPT

Klowait, N., & Erofeeva, M. (2025). The presentation of self in the age of ChatGPT. Frontiers in Sociology, 10, Article 1614473. https://doi.org/10.3389/fsoc.2025.1614473


Exact Computation of Any-Order Shapley Interactions for Graph Neural Networks

Muschalik, M., Fumagalli, F., Frazzetto, P., Strotherm, J., Hermes, L., Sperduti, A., Hüllermeier, E., & Hammer, B. (2025). Exact Computation of Any-Order Shapley Interactions for Graph Neural Networks. The Thirteenth International Conference on Learning Representations (ICLR).


Explaining Outliers using Isolation Forest and Shapley Interactions

Visser, R., Fumagalli, F., Hüllermeier, E., & Hammer, B. (2025). Explaining Outliers using Isolation Forest and Shapley Interactions. Proceedings of the European Symposium on Artificial Neural Networks (ESANN).


Unifying Feature-Based Explanations with Functional ANOVA and Cooperative Game Theory

Fumagalli, F., Muschalik, M., Hüllermeier, E., Hammer, B., & Herbinger, J. (2025). Unifying Feature-Based Explanations with Functional ANOVA and Cooperative Game Theory. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics (AISTATS), 258, 5140–5148.


A BFO-based ontology of context for Social XAI

Booshehri, M., Buschmeier, H., & Cimiano, P. (2025). A BFO-based ontology of context for Social XAI. Abstracts of the 3rd TRR 318 Conference: Contextualizing Explanations. 3rd TRR 318 Conference: Contextualizing Explanations, Bielefeld, Germany.


Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues

Fichtel, L., Spliethöver, M., Hüllermeier, E., Jimenez, P., Klowait, N., Kopp, S., Ngonga Ngomo, A.-C., Robrecht, A., Scharlau, I., Terfloth, L., Vollmer, A.-L., & Wachsmuth, H. (n.d.). Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues. Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Annual Meeting of the Special Interest Group on Discourse and Dialogue.


Adaptive Prompting: Ad-hoc Prompt Composition for Social Bias Detection

Spliethöver, M., Knebler, T., Fumagalli, F., Muschalik, M., Hammer, B., Hüllermeier, E., & Wachsmuth, H. (2025). Adaptive Prompting: Ad-hoc Prompt Composition for Social Bias Detection. In L. Chiruzzo, A. Ritter, & L. Wang (Eds.), Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) (pp. 2421–2449). Association for Computational Linguistics.


SemDial 2025 – Bialogue. Proceedings of the 29th Workshop on the Semantics and Pragmatics of Dialogue

Ilinykh, N., Robrecht, A., Kopp, S., & Buschmeier, H. (Eds.). (2025). SemDial 2025 – Bialogue. Proceedings of the 29th Workshop on the Semantics and Pragmatics of Dialogue.


“I'm Actually More Interested in AI Than in Computer Science” - 12-Year-Olds Describing Their First Encounter with AI

Lenke, M., Lehner, L., & Landman, M. (2025). “I’m Actually More Interested in AI Than in Computer Science” - 12-Year-Olds Describing Their First Encounter with AI. 2025 IEEE Global Engineering Education Conference (EDUCON). https://doi.org/10.1109/educon62633.2025.11016657


Enhancing AI Interaction through Co-Construction: A Multi-Faceted Workshop Framework

Lenke, M., & Schulte, C. (2025). Enhancing AI Interaction through Co-Construction: A Multi-Faceted Workshop Framework. 2025 IEEE Global Engineering Education Conference (EDUCON). https://doi.org/10.1109/educon62633.2025.11016326


The Dual Nature as a Local Context to Explore Verbal Behaviour in Game Explanations

Fisher, J. B., & Terfloth, L. (2025). The Dual Nature as a Local Context to Explore Verbal Behaviour in Game Explanations. Proceedings of the 29th Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2025).


Metaphors in 24 WIRED Level 5 Videos (Data corpus)

Scharlau, I., & Miriam, K. (2025). Metaphors in 24 WIRED Level 5 Videos (Data corpus). OSF. https://doi.org/10.17605/OSF.IO/94A2J


The power of combined modalities in interactive robot learning

Beierling, H., Beierling, R., & Vollmer, A.-L. (2025). The power of combined modalities in interactive robot learning. Frontiers in Robotics and AI, 12.


Human-Interactive Robot Learning: Definition, Challenges, and Recommendations

Baraka, K., Idrees, I., Faulkner, T. K., Biyik, E., Booth, S., Chetouani, M., Grollman, D. H., Saran, A., Senft, E., Tulli, S., Vollmer, A.-L., Andriella, A., Beierling, H., Horter, T., Kober, J., Sheidlower, I., Taylor, M. E., van Waveren, S., & Xiao, X. (n.d.). Human-Interactive Robot Learning: Definition, Challenges, and Recommendations. Transactions on Human-Robot Interaction.


Role Perception Questionnaire: Co-construction. Scales manual

Buhl, H. M., Fisher, J. B., & Rohlfing, K. J. (2025). Role Perception Questionnaire: Co-construction. Scales manual. OSF.


Civic Vice in Digital Governance

Reijers, W. (2025). Civic Vice in Digital Governance. In Public Governance and Emerging Technologies. Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-84748-6_14


Introduction to the Ethics of Emerging Technologies

Reijers, W., Thomas Young, M., & Coeckelbergh, M. (2025). Introduction to the Ethics of Emerging Technologies.


How to Govern the Confidence Machine?

de Filippi, P., Mannan, M., & Reijers, W. (2025). How to Govern the Confidence Machine? Regulation & Governance. https://doi.org/10.1111/rego.70017


Implementing a computational cognitive process model of medical diagnostic reasoning

Battefeld, D., & Kopp, S. (2025). Implementing a computational cognitive process model of medical diagnostic reasoning. Proceedings of KogWis 2025: Conference of the German Cognitive Science Society. Flexible Minds: Situated and Comparative Perspectives (KogWis 2025), Bochum, Germany.


Manners Matter: Action history guides attention and repair choices during interaction

Singh, A., & Rohlfing, K. J. (2025). Manners Matter: Action history guides attention and repair choices during interaction. IEEE International Conference on Development and Learning (ICDL), Prague. https://doi.org/10.31234/osf.io/yn2we_v1


MUNDEX Annotations

Buschmeier, H., Grimminger, A., Wagner, P., Lazarov, S. T., Türk, O., & Wang, Y. (2025). MUNDEX Annotations. LibreCat University. https://doi.org/10.5281/ZENODO.17129817


TRR 318, Project A01, WP 2.1. Scales manual

Buhl, H. M., Herrmann, P., & Bolinger, D. X. (2025). TRR 318, Project A01, WP 2.1. Scales manual. OSF.


TRR 318, Project A01, WP 2.2. Scales manual

Buhl, H. M., Herrmann, P., & Bolinger, D. X. (2025). TRR 318, Project A01, WP 2.2. Scales manual. OSF.


Embedding Psycholinguistics: An Interactive Framework for Studying Language in Action

Singh, A., & Rohlfing, K. J. (2025). Embedding Psycholinguistics: An Interactive Framework for Studying Language in Action. 6th Biannual Conference of the German Society for Cognitive Science, Bochum, Germany. https://doi.org/10.17605/OSF.IO/8PR23


MSL: Multi-class Scoring Lists for Interpretable Incremental Decision-Making

Heid, S., Kornowicz, J., Hanselle, J., Thommes, K., & Hüllermeier, E. (2025). MSL: Multi-class Scoring Lists for Interpretable Incremental Decision-Making. In Communications in Computer and Information Science. Springer Nature Switzerland. https://doi.org/10.1007/978-3-032-08327-2_6


Are numerical or verbal explanations of AI the key to appropriate user reliance and error detection?

Papenkordt, J., Ngonga Ngomo, A.-C., & Thommes, K. (2025). Are numerical or verbal explanations of AI the key to appropriate user reliance and error detection? Behaviour & Information Technology, 1–22. https://doi.org/10.1080/0144929x.2025.2568928


Why not? Developing ABox Abduction beyond Repairs

Haak, A., Koopmann, P., Mahmood, Y., & Turhan, A.-Y. (2025). Why not? Developing ABox Abduction beyond Repairs. In arXiv:2507.21955.


Can AI Regulate Your Emotions? An Empirical Investigation of the Influence of AI Explanations and Emotion Regulation on Human Decision-Making Factors

Lammert, O. (2025). Can AI Regulate Your Emotions? An Empirical Investigation of the Influence of AI Explanations and Emotion Regulation on Human Decision-Making Factors. In Communications in Computer and Information Science. Springer Nature Switzerland. https://doi.org/10.1007/978-3-032-08333-3_11


Is explaining more like showing or more like building? Agency in metaphors of explaining

Porwol, P. F., & Scharlau, I. (2025). Is explaining more like showing or more like building? Agency in metaphors of explaining. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2025.1628706


An Empirical Examination of the Evaluative AI Framework

Kornowicz, J. (2025). An Empirical Examination of the Evaluative AI Framework. International Journal of Human–Computer Interaction, 1–19. https://doi.org/10.1080/10447318.2025.2581260


Healthy Distrust in AI systems

Paaßen, B., Alpsancar, S., Matzner, T., & Scharlau, I. (2025). Healthy Distrust in AI systems. In arXiv.


Explainability and AI Governance

Reijers, W., Matzner, T., & Alpsancar, S. (2025). Explainability and AI Governance. In M. Farina, X. Yu, & J. Chen (Eds.), Digital Development. Technology, Ethics and Governance. Routledge. https://doi.org/10.4324/9781003567622-22


The reflection of interactional monitoring in the dynamics of verbal and nonverbal forms of explaining

Lazarov, S. T. (2025). The reflection of interactional monitoring in the dynamics of verbal and nonverbal forms of explaining. Universitätsbibliothek Paderborn. https://doi.org/10.17619/UNIPB/1-2446


Forms of Understanding for XAI-Explanations

Buschmeier, H., Buhl, H. M., Kern, F., Grimminger, A., Beierling, H., Fisher, J. B., Groß, A., Horwath, I., Klowait, N., Lazarov, S. T., Lenke, M., Lohmer, V., Rohlfing, K., Scharlau, I., Singh, A., Terfloth, L., Vollmer, A.-L., Wang, Y., Wilmes, A., & Wrede, B. (2025). Forms of Understanding for XAI-Explanations. Cognitive Systems Research, 94, Article 101419. https://doi.org/10.1016/j.cogsys.2025.101419


What do metaphors of understanding hide?

Porwol, P. F., & Scharlau, I. (2025). What do metaphors of understanding hide? Studia Neofilologiczne: Rozprawy Językoznawcze (Modern Language Studies: Linguistic Essays), XXI, 181–198.


2024

Humans in XAI: Increased Reliance in Decision-Making Under Uncertainty by Using Explanation Strategies

Lammert, O., Richter, B., Schütze, C., Thommes, K., & Wrede, B. (2024). Humans in XAI: Increased Reliance in Decision-Making Under Uncertainty by Using Explanation Strategies. Frontiers in Behavioral Economics. https://doi.org/10.3389/frbhe.2024.1377075


Effects of task difficulty on visual processing speed

Banh, N. C., & Scharlau, I. (2024). Effects of task difficulty on visual processing speed. Tagung experimentell arbeitender Psycholog:innen (TeaP), Regensburg.


Learning decision catalogues for situated decision making: The case of scoring systems

Heid, S., Hanselle, J. M., Fürnkranz, J., & Hüllermeier, E. (2024). Learning decision catalogues for situated decision making: The case of scoring systems. International Journal of Approximate Reasoning, 171, Article 109190. https://doi.org/10.1016/j.ijar.2024.109190


Vernakulärer Code oder die Geister, die der Algorithmus rief - digitale Schriftlichkeit im Kontext von sozialen Medienplattformen

Schulz, C. (2024). Vernakulärer Code oder die Geister, die der Algorithmus rief - digitale Schriftlichkeit im Kontext von sozialen Medienplattformen. In M. Bartelmus & A. Nebrig (Eds.), Digitale Schriftlichkeit – Programmieren, Prozessieren und Codieren von Schrift (1st ed.). transcript. https://doi.org/10.1515/9783839468135-009


Integrating Representational Gestures into Automatically Generated Embodied Explanations and its Effects on Understanding and Interaction Quality

Robrecht, A., Voss, H., Gottschalk, L., & Kopp, S. (2024). Integrating Representational Gestures into Automatically Generated Embodied Explanations and its Effects on Understanding and Interaction Quality. In arXiv:2406.12544.


Human Emotions in AI Explanations

Thommes, K., Lammert, O., Schütze, C., Richter, B., & Wrede, B. (2024). Human Emotions in AI Explanations. Communications in Computer and Information Science. https://doi.org/10.1007/978-3-031-63803-9_15


Analyzing the Use of Metaphors in News Editorials for Political Framing

Sengupta, M., El Baff, R., Alshomary, M., & Wachsmuth, H. (2024). Analyzing the Use of Metaphors in News Editorials for Political Framing. In K. Duh, H. Gomez, & S. Bethard (Eds.), Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) (pp. 3621–3631). Association for Computational Linguistics.


Warum und wozu erklärbare KI? Über die Verschiedenheit dreier paradigmatischer Zwecksetzungen

Alpsancar, S. (2024). Warum und wozu erklärbare KI? Über die Verschiedenheit dreier paradigmatischer Zwecksetzungen. In R. Adolphi, S. Alpsancar, S. Hahn, & M. Kettner (Eds.), Philosophische Digitalisierungsforschung. Verantwortung, Verständigung, Vernunft, Macht (pp. 55–113). transcript.


A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support

Liedeker, F., Sanchez-Graillet, O., Seidler, M., Brandt, C., Wellmer, J., & Cimiano, P. (n.d.). A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support. First Workshop on Natural Language Argument-Based Explanations, Santiago de Compostela, Spain.


An Empirical Investigation of Users' Assessment of XAI Explanations: Identifying the Sweet-Spot of Explanation Complexity

Liedeker, F., Düsing, C., Nieveler, M., & Cimiano, P. (2024). An Empirical Investigation of Users’ Assessment of XAI Explanations: Identifying the Sweet-Spot of Explanation Complexity. 2nd World Conference on eXplainable Artificial Intelligence, Valletta, Malta.


ASCODI: An XAI-based interactive reasoning support system for justifiable medical diagnosing

Battefeld, D., Liedeker, F., Cimiano, P., & Kopp, S. (2024). ASCODI: An XAI-based interactive reasoning support system for justifiable medical diagnosing. Proceedings of the 1st Workshop on Multimodal, Affective and Interactive EXplainable AI (MAI-XAI). European Conference on Artificial Intelligence (ECAI), Santiago de Compostela, Spain.


Coupling of Task and Partner Model: Investigating the Intra-Individual Variability in Gaze during Human–Robot Explanatory Dialogue

Singh, A., & Rohlfing, K. J. (2024). Coupling of Task and Partner Model: Investigating the Intra-Individual Variability in Gaze during Human–Robot Explanatory Dialogue. Proceedings of 26th ACM International Conference on Multimodal Interaction (ICMI 2024). 26th ACM International Conference on Multimodal Interaction (ICMI 2024), San Jose, Costa Rica. https://doi.org/10.1145/3686215.3689202


Changes in partner models – Effects of adaptivity in the course of explanations

Buhl, H. M., Fisher, J. B., & Rohlfing, K. (2024). Changes in partner models – Effects of adaptivity in the course of explanations. Proceedings of the Annual Meeting of the Cognitive Science Society, 46.


Vom foto-sozialen Graph zum Story-Format: Über die Institutionalisierung sozialmedialer Infrastruktur aus dem Geiste der Fotografie

Schulz, C. (2024). Vom foto-sozialen Graph zum Story-Format: Über die Institutionalisierung sozialmedialer Infrastruktur aus dem Geiste der Fotografie. In A. Schürmann & K. Yacavone (Eds.), Die Fotografie und ihre Institutionen. Von der Lehrsammlung zum Bundesinstitut (1st ed.). Reimer Verlag. https://doi.org/10.5771/9783496030980


Static Socio-demographic and Individual Factors for Generating Explanations in XAI: Can they serve as a prior in DSS for adaptation of explanation strategies?

Schütze, C., Richter, B., Lammert, O., Thommes, K., & Wrede, B. (2024). Static Socio-demographic and Individual Factors for Generating Explanations in XAI: Can they serve as a prior in DSS for adaptation of explanation strategies? HAI ’24: Proceedings of the 12th International Conference on Human-Agent Interaction, 141–149. https://doi.org/10.1145/3687272.3688300


Variations in explainers’ gesture deixis in explanations related to the monitoring of explainees’ understanding

Lazarov, S. T., & Grimminger, A. (2024). Variations in explainers’ gesture deixis in explanations related to the monitoring of explainees’ understanding. Proceedings of the Annual Meeting of the Cognitive Science Society, 46.


Perception and Consideration of the Explainees’ Needs for Satisfying Explanations

Schaffer, M. E., Terfloth, L., Schulte, C., & Buhl, H. M. (2024). Perception and Consideration of the Explainees’ Needs for Satisfying Explanations. 2nd World Conference on eXplainable Artificial Intelligence, Valletta, Malta.


Explainers’ Mental Representations of Explainees’ Needs in Everyday Explanations

Schaffer, M. E., Terfloth, L., Schulte, C., & Buhl, H. M. (2024). Explainers’ Mental Representations of Explainees’ Needs in Everyday Explanations. Joint Proceedings of the XAI-2024 Late-Breaking Work, Demos and Doctoral Consortium, 3793.


Benefiting from Binary Negations? Verbal Negations Decrease Visual Attention and Balance Its Distribution

Banh, N. C., Tünnermann, J., Rohlfing, K. J., & Scharlau, I. (2024). Benefiting from Binary Negations? Verbal Negations Decrease Visual Attention and Balance Its Distribution. Frontiers in Psychology, 15. https://doi.org/10.3389/fpsyg.2024.1451309


Human-AI Co-Construction of Interpretable Predictive Models: The Case of Scoring Systems

Heid, S., Kornowicz, J., Hanselle, J. M., Hüllermeier, E., & Thommes, K. (2024). Human-AI Co-Construction of Interpretable Predictive Models: The Case of Scoring Systems. Proceedings 34. Workshop Computational Intelligence, 21, 233.


Modeling the Quality of Dialogical Explanations

Alshomary, M., Lange, F., Booshehri, M., Sengupta, M., Cimiano, P., & Wachsmuth, H. (2024). Modeling the Quality of Dialogical Explanations. In N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, & N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 11523–11536). ELRA and ICCL.


AI explainability, temporality, and civic virtue

Reijers, W., Matzner, T., Alpsancar, S., & Philippi, M. (2024). AI explainability, temporality, and civic virtue. Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21st International Conference on the Ethical and Social Impacts of ICT. Universidad de La Rioja, 2024.


Unpacking the purposes of explainable AI

Alpsancar, S., Matzner, T., & Philippi, M. (2024). Unpacking the purposes of explainable AI. Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21st International Conference on the Ethical and Social Impacts of ICT, 31–35.


When to use a metaphor: Metaphors in dialogical explanations with addressees of different expertise

Scharlau, I., Körber, M., Sengupta, M., & Wachsmuth, H. (2024). When to use a metaphor: Metaphors in dialogical explanations with addressees of different expertise. Frontiers in Language Sciences, 3, 1474924.


Towards a Computational Architecture for Co-Constructive Explainable Systems

Buschmeier, H., Cimiano, P., Kopp, S., Kornowicz, J., Lammert, O., Matarese, M., Mindlin, D., Robrecht, A. S., Vollmer, A.-L., Wagner, P., Wrede, B., & Booshehri, M. (2024). Towards a Computational Architecture for Co-Constructive Explainable Systems. Proceedings of the 2024 Workshop on Explainability Engineering, 20–25. https://doi.org/10.1145/3648505.3648509


Changes in the topical structure of explanations are related to explainees’ multimodal behaviour

Lazarov, S. T., Biermeier, K., & Grimminger, A. (2024). Changes in the topical structure of explanations are related to explainees’ multimodal behaviour. Interaction Studies, 25(3), 257–280. https://doi.org/10.1075/is.23033.laz


The mental representation of the object of explanation in the process of co-constructive explanations

Schaffer, M., & Buhl, H. M. (2024). The mental representation of the object of explanation in the process of co-constructive explanations. In U. Ansorge, B. Szaszkó, & L. Werner (Eds.), 53rd DGPs Congress - Abstracts.


Von der Kunst, die richtigen Fragen zu stellen. Das Potential der Phänomenologie für die Technikfolgenabschätzung

Philippi, M. (2024). Von der Kunst, die richtigen Fragen zu stellen. Das Potential der Phänomenologie für die Technikfolgenabschätzung. TA24: Methoden für die Technikfolgenabschätzung – im Spannungsfeld zwischen bewährter Praxis und neuen Möglichkeiten, ÖAW Wien.



Journal Articles

2025

Trust, distrust, and appropriate reliance in (X)AI: A conceptual clarification of user trust and survey of its empirical evaluation

Visser, R., Peters, T. M., Scharlau, I., & Hammer, B. (2025). Trust, distrust, and appropriate reliance in (X)AI: A conceptual clarification of user trust and survey of its empirical evaluation. Cognitive Systems Research, Article 101357. https://doi.org/10.1016/j.cogsys.2025.101357


Interacting with fallible AI: Is distrust helpful when receiving AI misclassifications?

Peters, T. M., & Scharlau, I. (2025). Interacting with fallible AI: Is distrust helpful when receiving AI misclassifications? Frontiers in Psychology, 16. https://doi.org/10.3389/fpsyg.2025.1574809


Algorithm, expert, or both? Evaluating the role of feature selection methods on user preferences and reliance

Kornowicz, J., & Thommes, K. (2025). Algorithm, expert, or both? Evaluating the role of feature selection methods on user preferences and reliance. PLOS ONE. https://doi.org/10.1371/journal.pone.0318874


On "Super Likes" and Algorithmic (In)Visibilities: Frictions Between Social and Economic Logics in the Context of Social Media Platforms

Schulz, C. (2025). On “Super Likes” and Algorithmic (In)Visibilities: Frictions Between Social and Economic Logics in the Context of Social Media Platforms. Digital Culture & Society, 2/2023, 45–68. https://doi.org/10.14361/dcs-2023-0204


Contrastive Verbal Guidance: A Beneficial Context for Attention To Events and Their Memory?

Singh, A., & Rohlfing, K. J. (2025). Contrastive Verbal Guidance: A Beneficial Context for Attention To Events and Their Memory? Cognitive Science, 49(8), Article e70096. https://doi.org/10.1111/cogs.70096


Would I regret being different? The influence of social norms on attitudes toward AI usage

Kornowicz, J., Pape, M., & Thommes, K. (2025). Would I regret being different? The influence of social norms on attitudes toward AI usage. arXiv. https://doi.org/10.48550/ARXIV.2509.04241


Challenges and Limits in Explaining and Acoustic Modeling of Voice Characteristics

Wiechmann, J., & Wagner, P. (2025). Challenges and Limits in Explaining and Acoustic Modeling of Voice Characteristics. Journal of Voice. https://doi.org/10.1016/j.jvoice.2025.07.036


Understanding personal agency through metaphor, or Why academic writing is (not) like a roller-coaster ride

Karsten, A. (2025). Understanding personal agency through metaphor, or Why academic writing is (not) like a roller-coaster ride. Frontiers in Language Sciences, 4, Article 1567498. https://doi.org/10.3389/flang.2025.1567498


Dung’s Argumentation Framework: Unveiling the Expressive Power with Inconsistent Databases

Mahmood, Y., Hecher, M., & Ngonga Ngomo, A.-C. (2025). Dung’s Argumentation Framework: Unveiling the Expressive Power with Inconsistent Databases. Proceedings of the AAAI Conference on Artificial Intelligence, 39(14), 15058–15066. https://doi.org/10.1609/aaai.v39i14.33651


Logics with probabilistic team semantics and the Boolean negation

Hannula, M., Hirvonen, M., Kontinen, J., Mahmood, Y., Meier, A., & Virtema, J. (2025). Logics with probabilistic team semantics and the Boolean negation. Journal of Logic and Computation, 35(3). https://doi.org/10.1093/logcom/exaf021


The presentation of self in the age of ChatGPT

Klowait, N., & Erofeeva, M. (2025). The presentation of self in the age of ChatGPT. Frontiers in Sociology, 10, Article 1614473. https://doi.org/10.3389/fsoc.2025.1614473


The power of combined modalities in interactive robot learning

Beierling, H., Beierling, R., & Vollmer, A.-L. (2025). The power of combined modalities in interactive robot learning. Frontiers in Robotics and AI, 12.


Human-Interactive Robot Learning: Definition, Challenges, and Recommendations

Baraka, K., Idrees, I., Faulkner, T. K., Biyik, E., Booth, S., Chetouani, M., Grollman, D. H., Saran, A., Senft, E., Tulli, S., Vollmer, A.-L., Andriella, A., Beierling, H., Horter, T., Kober, J., Sheidlower, I., Taylor, M. E., van Waveren, S., & Xiao, X. (n.d.). Human-Interactive Robot Learning: Definition, Challenges, and Recommendations. Transactions on Human-Robot Interaction.


How to Govern the Confidence Machine?

de Filippi, P., Mannan, M., & Reijers, W. (2025). How to Govern the Confidence Machine? Regulation & Governance. https://doi.org/10.1111/rego.70017


Are numerical or verbal explanations of AI the key to appropriate user reliance and error detection?

Papenkordt, J., Ngonga Ngomo, A.-C., & Thommes, K. (2025). Are numerical or verbal explanations of AI the key to appropriate user reliance and error detection? Behaviour & Information Technology, 1–22. https://doi.org/10.1080/0144929x.2025.2568928


Is explaining more like showing or more like building? Agency in metaphors of explaining

Porwol, P. F., & Scharlau, I. (2025). Is explaining more like showing or more like building? Agency in metaphors of explaining. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2025.1628706


An Empirical Examination of the Evaluative AI Framework

Kornowicz, J. (2025). An Empirical Examination of the Evaluative AI Framework. International Journal of Human–Computer Interaction, 1–19. https://doi.org/10.1080/10447318.2025.2581260



Forms of Understanding for XAI-Explanations

Buschmeier, H., Buhl, H. M., Kern, F., Grimminger, A., Beierling, H., Fisher, J. B., Groß, A., Horwath, I., Klowait, N., Lazarov, S. T., Lenke, M., Lohmer, V., Rohlfing, K., Scharlau, I., Singh, A., Terfloth, L., Vollmer, A.-L., Wang, Y., Wilmes, A., & Wrede, B. (2025). Forms of Understanding for XAI-Explanations. Cognitive Systems Research, 94, Article 101419. https://doi.org/10.1016/j.cogsys.2025.101419


What do metaphors of understanding hide?

Porwol, P. F., & Scharlau, I. (2025). What do metaphors of understanding hide? Studia Neofilologiczne: Rozprawy Językoznawcze (Modern Language Studies: Linguistic Essays), XXI, 181–198.


2024

Humans in XAI: Increased Reliance in Decision-Making Under Uncertainty by Using Explanation Strategies

Lammert, O., Richter, B., Schütze, C., Thommes, K., & Wrede, B. (2024). Humans in XAI: Increased Reliance in Decision-Making Under Uncertainty by Using Explanation Strategies. Frontiers in Behavioral Economics. https://doi.org/10.3389/frbhe.2024.1377075


Learning decision catalogues for situated decision making: The case of scoring systems

Heid, S., Hanselle, J. M., Fürnkranz, J., & Hüllermeier, E. (2024). Learning decision catalogues for situated decision making: The case of scoring systems. International Journal of Approximate Reasoning, 171, Article 109190. https://doi.org/10.1016/j.ijar.2024.109190


Changes in partner models – Effects of adaptivity in the course of explanations

Buhl, H. M., Fisher, J. B., & Rohlfing, K. (2024). Changes in partner models – Effects of adaptivity in the course of explanations. Proceedings of the Annual Meeting of the Cognitive Science Society, 46.


Benefiting from Binary Negations? Verbal Negations Decrease Visual Attention and Balance Its Distribution

Banh, N. C., Tünnermann, J., Rohlfing, K. J., & Scharlau, I. (2024). Benefiting from Binary Negations? Verbal Negations Decrease Visual Attention and Balance Its Distribution. Frontiers in Psychology, 15. https://doi.org/10.3389/fpsyg.2024.1451309


When to use a metaphor: Metaphors in dialogical explanations with addressees of different expertise

Scharlau, I., Körber, M., Sengupta, M., & Wachsmuth, H. (2024). When to use a metaphor: Metaphors in dialogical explanations with addressees of different expertise. Frontiers in Language Sciences, 3, 1474924.


Changes in the topical structure of explanations are related to explainees’ multimodal behaviour

Lazarov, S. T., Biermeier, K., & Grimminger, A. (2024). Changes in the topical structure of explanations are related to explainees’ multimodal behaviour. Interaction Studies, 25(3), 257–280. https://doi.org/10.1075/is.23033.laz


Explain with, rather than explain to: How explainees shape their own learning

Fisher, J. B., Rohlfing, K. J., Donnellan, E., Grimminger, A., Gu, Y., & Vigliocco, G. (2024). Explain with, rather than explain to: How explainees shape their own learning. Interaction Studies, 25(2), 244–255. https://doi.org/10.1075/is.23019.fis


Voices in Dialogue: Taking Polyphony in Academic Writing Seriously

Karsten, A. (2024). Voices in Dialogue: Taking Polyphony in Academic Writing Seriously. Written Communication, 41(1), 6–36. https://doi.org/10.1177/07410883231207104


Can AI explain AI? Interactive co-construction of explanations among human and artificial agents

Klowait, N., Erofeeva, M., Lenke, M., Horwath, I., & Buschmeier, H. (2024). Can AI explain AI? Interactive co-construction of explanations among human and artificial agents. Discourse & Communication, 18(6), 917–930. https://doi.org/10.1177/17504813241267069


An Empirical Examination of the Evaluative AI Framework

Kornowicz, J. (2024). An Empirical Examination of the Evaluative AI Framework. arXiv. https://doi.org/10.48550/ARXIV.2411.08583


Explainable AI for Audio and Visual Affective Computing: A Scoping Review

Johnson, D., Hakobyan, O., Paletschek, J., & Drimalla, H. (2024). Explainable AI for Audio and Visual Affective Computing: A Scoping Review. IEEE Transactions on Affective Computing, 16(2), 518–536. https://doi.org/10.1109/taffc.2024.3505269


What you need to know about a learning robot: Identifying the enabling architecture of complex systems

Beierling, H., Richter, P., Brandt, M., Terfloth, L., Schulte, C., Wersing, H., & Vollmer, A.-L. (2024). What you need to know about a learning robot: Identifying the enabling architecture of complex systems. Cognitive Systems Research, 88.


2023

Aggregating Human Domain Knowledge for Feature Ranking

Kornowicz, J., & Thommes, K. (2023). Aggregating Human Domain Knowledge for Feature Ranking. Artificial Intelligence in HCI. https://doi.org/10.1007/978-3-031-35891-3_7


RISE: an open-source architecture for interdisciplinary and reproducible human–robot interaction research

Groß, A., Schütze, C., Brandt, M., Wrede, B., & Richter, B. (2023). RISE: an open-source architecture for interdisciplinary and reproducible human–robot interaction research. Frontiers in Robotics and AI, 10. https://doi.org/10.3389/frobt.2023.1245501


From mental models to algorithmic imaginaries to co-constructive mental models

Schulz, C. (2023). From mental models to algorithmic imaginaries to co-constructive mental models. Navigationen – Zeitschrift Für Medien- Und Kulturwissenschaften, 2, 65–75. http://dx.doi.org/10.25819/ubsi/10428


Tech/Imaginations – Introduction

Schulz, C., & Schröter, J. (2023). Tech/Imaginations – Introduction. Navigationen – Zeitschrift Für Medien- Und Kulturwissenschaften, 2, 7–14. http://dx.doi.org/10.25819/ubsi/10428


A new algorithmic imaginary

Schulz, C. (2023). A new algorithmic imaginary. Media, Culture & Society, 45(3), 646–655. https://doi.org/10.1177/01634437221136014


EEG Correlates of Distractions and Hesitations in Human–Robot Interaction: A LabLinking Pilot Study

Richter, B., Putze, F., Ivucic, G., Brandt, M., Schütze, C., Reisenhofer, R., Wrede, B., & Schultz, T. (2023). EEG Correlates of Distractions and Hesitations in Human–Robot Interaction: A LabLinking Pilot Study. Multimodal Technologies and Interaction, 7(4), Article 37. https://doi.org/10.3390/mti7040037


Does Explainability Require Transparency?

Esposito, E. (2023). Does Explainability Require Transparency? Sociologica, 16(3), 17–27. https://doi.org/10.6092/ISSN.1971-8853/15804


Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction

Esposito, E. (2023). Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction. Sociologica, 16(3), 1–4. https://doi.org/10.6092/ISSN.1971-8853/16265


On the Multimodal Resolution of a Search Sequence in Virtual Reality

Klowait, N. (2023). On the Multimodal Resolution of a Search Sequence in Virtual Reality. Human Behavior and Emerging Technologies, 2023, 1–15. https://doi.org/10.1155/2023/8417012


Halting the Decay of Talk

Klowait, N., & Erofeeva, M. (2023). Halting the Decay of Talk. Social Interaction. Video-Based Studies of Human Sociality, 6(1). https://doi.org/10.7146/si.v6i1.136903


Scaffolding the human partner by contrastive guidance in an explanatory human-robot dialogue

Groß, A., Singh, A., Banh, N. C., Richter, B., Scharlau, I., Rohlfing, K. J., & Wrede, B. (2023). Scaffolding the human partner by contrastive guidance in an explanatory human-robot dialogue. Frontiers in Robotics and AI, 10. https://doi.org/10.3389/frobt.2023.1236184


Incremental permutation feature importance (iPFI): towards online explanations on data streams

Fumagalli, F., Muschalik, M., Hüllermeier, E., & Hammer, B. (2023). Incremental permutation feature importance (iPFI): towards online explanations on data streams. Machine Learning, 112(12), 4863–4903. https://doi.org/10.1007/s10994-023-06385-y


“I do not know! but why?” — Local model-agnostic example-based explanations of reject

Artelt, A., Visser, R., & Hammer, B. (2023). “I do not know! but why?” — Local model-agnostic example-based explanations of reject. Neurocomputing, 558, Article 126722. https://doi.org/10.1016/j.neucom.2023.126722


Technology and Civic Virtue

Reijers, W. (2023). Technology and Civic Virtue. Philosophy & Technology, 36(4), Article 71. https://doi.org/10.1007/s13347-023-00669-w


2022

What is Missing in XAI So Far?

Schmid, U., & Wrede, B. (2022). What is Missing in XAI So Far? KI - Künstliche Intelligenz, 36(3–4), 303–315. https://doi.org/10.1007/s13218-022-00786-2


Explainable AI

Schmid, U., & Wrede, B. (2022). Explainable AI. KI - Künstliche Intelligenz, 36(3–4), 207–210. https://doi.org/10.1007/s13218-022-00788-0


AI: Back to the Roots?

Wrede, B. (2022). AI: Back to the Roots? KI - Künstliche Intelligenz, 36(2), 117–120. https://doi.org/10.1007/s13218-022-00773-7


Which “motionese” parameters change with children's age? Disentangling attention-getting from action-structuring modifications

Rohlfing, K., Vollmer, A.-L., Fritsch, J., & Wrede, B. (2022). Which “motionese” parameters change with children’s age? Disentangling attention-getting from action-structuring modifications. Frontiers in Communication, 7. https://doi.org/10.3389/fcomm.2022.922405


Agnostic Explanation of Model Change based on Feature Importance

Muschalik, M., Fumagalli, F., Hammer, B., & Huellermeier, E. (2022). Agnostic Explanation of Model Change based on Feature Importance. KI - Künstliche Intelligenz, 36(3–4), 211–224. https://doi.org/10.1007/s13218-022-00766-6


Modeling Feedback in Interaction With Conversational Agents—A Review

Axelsson, A., Buschmeier, H., & Skantze, G. (2022). Modeling Feedback in Interaction With Conversational Agents—A Review. Frontiers in Computer Science, 4. https://doi.org/10.3389/fcomp.2022.744574


Exploring monological and dialogical phases in naturally occurring explanations

Fisher, J. B., Lohmer, V., Kern, F., Barthlen, W., Gaus, S., & Rohlfing, K. (2022). Exploring monological and dialogical phases in naturally occurring explanations. KI - Künstliche Intelligenz, 36(3–4), 317–326. https://doi.org/10.1007/s13218-022-00787-1


2021

Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems

Rohlfing, K. J., Cimiano, P., Scharlau, I., Matzner, T., Buhl, H. M., Buschmeier, H., Esposito, E., Grimminger, A., Hammer, B., Haeb-Umbach, R., Horwath, I., Hüllermeier, E., Kern, F., Kopp, S., Thommes, K., Ngonga Ngomo, A.-C., Schulte, C., Wachsmuth, H., Wagner, P., & Wrede, B. (2021). Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems. IEEE Transactions on Cognitive and Developmental Systems, 13(3), 717–728. https://doi.org/10.1109/tcds.2020.3044366


2020

Understanding and Explaining Digital Artefacts - the Role of a Duality (Accepted Paper - Digital Publication Follows)

Budde, L., Schulte, C., Buhl, H. M., & Muehling, A. (2020). Understanding and Explaining Digital Artefacts - the Role of a Duality (Accepted Paper - Digital Publication Follows). Seventh International Conference on Learning and Teaching in Computing and Engineering.



Conferences

2026

Social Context in Human-AI Interaction (HAI): A Theoretical Framework Based on Multi-Perspectival Imaginaries

Menne, A. L., & Schulz, C. (n.d.). Social Context in Human-AI Interaction (HAI): A Theoretical Framework Based on Multi-Perspectival Imaginaries. In C. Stephanidis, M. Antona, S. Ntoa, & G. Salvendy (Eds.), HCI International 2026 Posters: 28th International Conference on Human-Computer Interaction, HCI 2026, Montreal, Canada, July 26-31, 2026, Proceedings. Springer International Publishing.


2025

Speech Synthesis along Perceptual Voice Quality Dimensions

Rautenberg, F., Kuhlmann, M., Seebauer, F., Wiechmann, J., Wagner, P., & Haeb-Umbach, R. (2025). Speech Synthesis along Perceptual Voice Quality Dimensions. ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India. https://doi.org/10.1109/icassp49660.2025.10888012


Synthesizing Speech with Selected Perceptual Voice Qualities – A Case Study with Creaky Voice

Rautenberg, F., Seebauer, F., Wiechmann, J., Kuhlmann, M., Wagner, P., & Haeb-Umbach, R. (2025). Synthesizing Speech with Selected Perceptual Voice Qualities – A Case Study with Creaky Voice. Interspeech 2025, Rotterdam. https://doi.org/10.21437/Interspeech.2025-1443


Investigating the Impact of Conceptual Metaphors on LLM-based NLI through Shapley Interactions

Sengupta, M., Muschalik, M., Fumagalli, F., Hammer, B., Hüllermeier, E., Ghosh, D., & Wachsmuth, H. (2025). Investigating the Impact of Conceptual Metaphors on LLM-based NLI through Shapley Interactions. Accepted in Findings of Empirical Methods in Natural Language Processing (EMNLP 2025).


Exact Computation of Any-Order Shapley Interactions for Graph Neural Networks

Muschalik, M., Fumagalli, F., Frazzetto, P., Strotherm, J., Hermes, L., Sperduti, A., Hüllermeier, E., & Hammer, B. (2025). Exact Computation of Any-Order Shapley Interactions for Graph Neural Networks. The Thirteenth International Conference on Learning Representations (ICLR).


Explaining Outliers using Isolation Forest and Shapley Interactions

Visser, R., Fumagalli, F., Hüllermeier, E., & Hammer, B. (2025). Explaining Outliers using Isolation Forest and Shapley Interactions. Proceedings of the European Symposium on Artificial Neural Networks (ESANN).


Unifying Feature-Based Explanations with Functional ANOVA and Cooperative Game Theory

Fumagalli, F., Muschalik, M., Hüllermeier, E., Hammer, B., & Herbinger, J. (2025). Unifying Feature-Based Explanations with Functional ANOVA and Cooperative Game Theory. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics (AISTATS), 258, 5140–5148.


Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues

Fichtel, L., Spliethöver, M., Hüllermeier, E., Jimenez, P., Klowait, N., Kopp, S., Ngonga Ngomo, A.-C., Robrecht, A., Scharlau, I., Terfloth, L., Vollmer, A.-L., & Wachsmuth, H. (n.d.). Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues. Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue.


Adaptive Prompting: Ad-hoc Prompt Composition for Social Bias Detection

Spliethöver, M., Knebler, T., Fumagalli, F., Muschalik, M., Hammer, B., Hüllermeier, E., & Wachsmuth, H. (2025). Adaptive Prompting: Ad-hoc Prompt Composition for Social Bias Detection. In L. Chiruzzo, A. Ritter, & L. Wang (Eds.), Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) (pp. 2421–2449). Association for Computational Linguistics.


“I'm Actually More Interested in AI Than in Computer Science” - 12-Year-Olds Describing Their First Encounter with AI

Lenke, M., Lehner, L., & Landman, M. (2025). “I’m Actually More Interested in AI Than in Computer Science” - 12-Year-Olds Describing Their First Encounter with AI. 2025 IEEE Global Engineering Education Conference (EDUCON). https://doi.org/10.1109/educon62633.2025.11016657


Enhancing AI Interaction through Co-Construction: A Multi-Faceted Workshop Framework

Lenke, M., & Schulte, C. (2025). Enhancing AI Interaction through Co-Construction: A Multi-Faceted Workshop Framework. 2025 IEEE Global Engineering Education Conference (EDUCON). https://doi.org/10.1109/educon62633.2025.11016326


Implementing a computational cognitive process model of medical diagnostic reasoning

Battefeld, D., & Kopp, S. (2025). Implementing a computational cognitive process model of medical diagnostic reasoning. Proceedings of KogWis 2025: Conference of the German Cognitive Science Society. Flexible Minds: Situated and Comparative Perspectives (KogWis 2025), Bochum, Germany.


Manners Matter: Action history guides attention and repair choices during interaction

Singh, A., & Rohlfing, K. J. (2025). Manners Matter: Action history guides attention and repair choices during interaction. IEEE International Conference on Development and Learning (ICDL), Prague. https://doi.org/10.31234/osf.io/yn2we_v1


Embedding Psycholinguistics: An Interactive Framework for Studying Language in Action

Singh, A., & Rohlfing, K. J. (2025). Embedding Psycholinguistics: An Interactive Framework for Studying Language in Action. 6th Biannual Conference of the German Society for Cognitive Science, Bochum, Germany. https://doi.org/10.17605/OSF.IO/8PR23


2024

Human Emotions in AI Explanations

Thommes, K., Lammert, O., Schütze, C., Richter, B., & Wrede, B. (2024). Human Emotions in AI Explanations. Communications in Computer and Information Science. https://doi.org/10.1007/978-3-031-63803-9_15


Analyzing the Use of Metaphors in News Editorials for Political Framing

Sengupta, M., El Baff, R., Alshomary, M., & Wachsmuth, H. (2024). Analyzing the Use of Metaphors in News Editorials for Political Framing. In K. Duh, H. Gomez, & S. Bethard (Eds.), Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) (pp. 3621–3631). Association for Computational Linguistics.


A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support

Liedeker, F., Sanchez-Graillet, O., Seidler, M., Brandt, C., Wellmer, J., & Cimiano, P. (n.d.). A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support. First Workshop on Natural Language Argument-Based Explanations, Santiago de Compostela, Spain.


An Empirical Investigation of Users' Assessment of XAI Explanations: Identifying the Sweet-Spot of Explanation Complexity

Liedeker, F., Düsing, C., Nieveler, M., & Cimiano, P. (2024). An Empirical Investigation of Users’ Assessment of XAI Explanations: Identifying the Sweet-Spot of Explanation Complexity. 2nd World Conference on eXplainable Artificial Intelligence, Valletta, Malta.


ASCODI: An XAI-based interactive reasoning support system for justifiable medical diagnosing

Battefeld, D., Liedeker, F., Cimiano, P., & Kopp, S. (2024). ASCODI: An XAI-based interactive reasoning support system for justifiable medical diagnosing. Proceedings of the 1st Workshop on Multimodal, Affective and Interactive EXplainable AI (MAI-XAI). European Conference on Artificial Intelligence (ECAI), Santiago de Compostela, Spain.


Coupling of Task and Partner Model: Investigating the Intra-Individual Variability in Gaze during Human–Robot Explanatory Dialogue

Singh, A., & Rohlfing, K. J. (2024). Coupling of Task and Partner Model: Investigating the Intra-Individual Variability in Gaze during Human–Robot Explanatory Dialogue. Proceedings of the 26th ACM International Conference on Multimodal Interaction (ICMI 2024), San Jose, Costa Rica. https://doi.org/10.1145/3686215.3689202


Static Socio-demographic and Individual Factors for Generating Explanations in XAI: Can they serve as a prior in DSS for adaptation of explanation strategies?

Schütze, C., Richter, B., Lammert, O., Thommes, K., & Wrede, B. (2024). Static Socio-demographic and Individual Factors for Generating Explanations in XAI: Can they serve as a prior in DSS for adaptation of explanation strategies? HAI ’24: Proceedings of the 12th International Conference on Human-Agent Interaction, 141–149. https://doi.org/10.1145/3687272.3688300


Variations in explainers’ gesture deixis in explanations related to the monitoring of explainees’ understanding

Lazarov, S. T., & Grimminger, A. (2024). Variations in explainers’ gesture deixis in explanations related to the monitoring of explainees’ understanding. Proceedings of the Annual Meeting of the Cognitive Science Society, 46.


Perception and Consideration of the Explainees’ Needs for Satisfying Explanations

Schaffer, M. E., Terfloth, L., Schulte, C., & Buhl, H. M. (2024). Perception and Consideration of the Explainees’ Needs for Satisfying Explanations. 2nd World Conference on eXplainable Artificial Intelligence, Valletta, Malta.


Explainers’ Mental Representations of Explainees’ Needs in Everyday Explanations

Schaffer, M. E., Terfloth, L., Schulte, C., & Buhl, H. M. (2024). Explainers’ Mental Representations of Explainees’ Needs in Everyday Explanations. Joint Proceedings of the XAI-2024 Late-Breaking Work, Demos and Doctoral Consortium. 3793.


Human-AI Co-Construction of Interpretable Predictive Models: The Case of Scoring Systems

Heid, S., Kornowicz, J., Hanselle, J. M., Hüllermeier, E., & Thommes, K. (2024). Human-AI Co-Construction of Interpretable Predictive Models: The Case of Scoring Systems. PROCEEDINGS 34. WORKSHOP COMPUTATIONAL INTELLIGENCE, 21, 233.


Modeling the Quality of Dialogical Explanations

Alshomary, M., Lange, F., Booshehri, M., Sengupta, M., Cimiano, P., & Wachsmuth, H. (2024). Modeling the Quality of Dialogical Explanations. In N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, & N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 11523–11536). ELRA and ICCL.


Towards a Computational Architecture for Co-Constructive Explainable Systems

Buschmeier, H., Cimiano, P., Kopp, S., Kornowicz, J., Lammert, O., Matarese, M., Mindlin, D., Robrecht, A. S., Vollmer, A.-L., Wagner, P., Wrede, B., & Booshehri, M. (2024). Towards a Computational Architecture for Co-Constructive Explainable Systems. Proceedings of the 2024 Workshop on Explainability Engineering, 20–25. https://doi.org/10.1145/3648505.3648509


Predictability of understanding in explanatory interactions based on multimodal cues

Türk, O., Lazarov, S. T., Wang, Y., Buschmeier, H., Grimminger, A., & Wagner, P. (2024). Predictability of understanding in explanatory interactions based on multimodal cues. Proceedings of the 26th ACM International Conference on Multimodal Interaction, 449–458. https://doi.org/10.1145/3678957.3685741


Dung's Argumentation Framework: Unveiling the Expressive Power with Inconsistent Databases

Mahmood, Y., Hecher, M., & Ngonga Ngomo, A.-C. (2024). Dung’s Argumentation Framework: Unveiling the Expressive Power with Inconsistent Databases. https://doi.org/10.1609/AAAI.V39I14.33651


The illusion of competence: Evaluating the effect of explanations on users’ mental models of visual question answering systems

Sieker, J., Junker, S., Utescher, R., Attari, N., Wersing, H., Buschmeier, H., & Zarrieß, S. (2024). The illusion of competence: Evaluating the effect of explanations on users’ mental models of visual question answering systems. Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 19459–19475. https://doi.org/10.18653/v1/2024.emnlp-main.1084


Quantitative Claim-Centric Reasoning in Logic-Based Argumentation

Hecher, M., Mahmood, Y., Meier, A., & Schmidt, J. (2024). Quantitative Claim-Centric Reasoning in Logic-Based Argumentation. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence. https://doi.org/10.24963/ijcai.2024/377


Towards a BFO-based ontology of understanding in explanatory interactions

Booshehri, M., Buschmeier, H., & Cimiano, P. (2024). Towards a BFO-based ontology of understanding in explanatory interactions. Proceedings of the 4th International Workshop on Data Meets Applied Ontologies in Explainable AI (DAO-XAI), Santiago de Compostela, Spain.


Detecting subtle differences between human and model languages using spectrum of relative likelihood

Xu, Y., Wang, Y., An, H., Liu, Z., & Li, Y. (2024). Detecting subtle differences between human and model languages using spectrum of relative likelihood. Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 10108–10121. https://doi.org/10.18653/v1/2024.emnlp-main.564


A model of factors contributing to the success of dialogical explanations

Booshehri, M., Buschmeier, H., & Cimiano, P. (2024). A model of factors contributing to the success of dialogical explanations. Proceedings of the 26th ACM International Conference on Multimodal Interaction, 373–381. https://doi.org/10.1145/3678957.3685744


Conversational feedback in scripted versus spontaneous dialogues: A comparative analysis

Pilán, I., Prévot, L., Buschmeier, H., & Lison, P. (2024). Conversational feedback in scripted versus spontaneous dialogues: A comparative analysis. Proceedings of the 25th Meeting of the Special Interest Group on Discourse and Dialogue, 440–457. https://doi.org/10.18653/v1/2024.sigdial-1.38


Turn-taking dynamics across different phases of explanatory dialogues

Wagner, P., Włodarczak, M., Buschmeier, H., Türk, O., & Gilmartin, E. (2024). Turn-taking dynamics across different phases of explanatory dialogues. Proceedings of the 28th Workshop on the Semantics and Pragmatics of Dialogue, 6–14.


No learning rates needed: Introducing SALSA - Stable Armijo Line Search Adaptation

Kenneweg, P., Kenneweg, T., Fumagalli, F., & Hammer, B. (2024). No learning rates needed: Introducing SALSA - Stable Armijo Line Search Adaptation. 2024 International Joint Conference on Neural Networks (IJCNN), 1–8. https://doi.org/10.1109/IJCNN60899.2024.10650124


Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles

Muschalik, M., Fumagalli, F., Hammer, B., & Huellermeier, E. (2024). Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 38(13), 14388–14396. https://doi.org/10.1609/aaai.v38i13.29352


SVARM-IQ: Efficient Approximation of Any-order Shapley Interactions through Stratification

Kolpaczki, P., Muschalik, M., Fumagalli, F., Hammer, B., & Huellermeier, E. (2024). SVARM-IQ: Efficient Approximation of Any-order Shapley Interactions through Stratification. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics (AISTATS), 238, 3520–3528.


KernelSHAP-IQ: Weighted Least Square Optimization for Shapley Interactions

Fumagalli, F., Muschalik, M., Kolpaczki, P., Hüllermeier, E., & Hammer, B. (2024). KernelSHAP-IQ: Weighted Least Square Optimization for Shapley Interactions. Proceedings of the 41st International Conference on Machine Learning (ICML), 235, 14308–14342.


shapiq: Shapley interactions for machine learning

Muschalik, M., Baniecki, H., Fumagalli, F., Kolpaczki, P., Hammer, B., & Huellermeier, E. (2024). shapiq: Shapley interactions for machine learning. Advances in Neural Information Processing Systems (NeurIPS), 37, 130324–130357.


Approximating the Shapley value without marginal contributions

Kolpaczki, P., Bengs, V., Muschalik, M., & Hüllermeier, E. (2024). Approximating the Shapley value without marginal contributions. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 38(12), 13246–13255.


Revisiting the phenomenon of syntactic complexity convergence on German dialogue data

Wang, Y., & Buschmeier, H. (2024). Revisiting the phenomenon of syntactic complexity convergence on German dialogue data. Proceedings of the 20th Conference on Natural Language Processing (KONVENS 2024), 75–80.


How much does nonverbal communication conform to entropy rate constancy?: A case study on listener gaze in interaction

Wang, Y., Xu, Y., Skantze, G., & Buschmeier, H. (2024). How much does nonverbal communication conform to entropy rate constancy?: A case study on listener gaze in interaction. Findings of the Association for Computational Linguistics ACL 2024, 3533–3545.


Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness

Spliethöver, M., Menon, S. N., & Wachsmuth, H. (2024). Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness. In L.-W. Ku, A. Martins, & V. Srikumar (Eds.), Findings of the Association for Computational Linguistics: ACL 2024 (pp. 9294–9313). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.findings-acl.553


Revealing the Dynamics of Medical Diagnostic Reasoning as Step-by-Step Cognitive Process Trajectories

Battefeld, D., Mues, S., Wehner, T., House, P., Kellinghaus, C., Wellmer, J., & Kopp, S. (2024). Revealing the Dynamics of Medical Diagnostic Reasoning as Step-by-Step Cognitive Process Trajectories. Proceedings of the 46th Annual Conference of the Cognitive Science Society. The Annual Meeting of the Cognitive Science Society, Rotterdam, NL.


2023

On Feature Importance and Interpretability of Speaker Representations

Rautenberg, F., Kuhlmann, M., Wiechmann, J., Seebauer, F., Wagner, P., & Haeb-Umbach, R. (2023). On Feature Importance and Interpretability of Speaker Representations. ITG Conference on Speech Communication, Aachen.


Explaining voice characteristics to novice voice practitioners - How successful is it?

Wiechmann, J., Rautenberg, F., Wagner, P., & Haeb-Umbach, R. (2023). Explaining voice characteristics to novice voice practitioners - How successful is it? 20th International Congress of the Phonetic Sciences (ICPhS).


The Role of Response Time for Algorithm Aversion in Fast and Slow Thinking Tasks

Lebedeva, A., Kornowicz, J., Lammert, O., & Papenkordt, J. (2023). The Role of Response Time for Algorithm Aversion in Fast and Slow Thinking Tasks. Artificial Intelligence in HCI. https://doi.org/10.1007/978-3-031-35891-3_9


The Importance of Distrust in AI

Peters, T. M., & Visser, R. W. (2023). The Importance of Distrust in AI. Communications in Computer and Information Science. https://doi.org/10.1007/978-3-031-44070-0_15


Re-examining the quality dimensions of synthetic speech

Seebauer, F., Kuhlmann, M., Haeb-Umbach, R., & Wagner, P. (2023). Re-examining the quality dimensions of synthetic speech. 12th Speech Synthesis Workshop (SSW) 2023.


What is AI Ethics? Ethics as means of self-regulation and the need for critical reflection

Alpsancar, S. (2023). What is AI Ethics? Ethics as means of self-regulation and the need for critical reflection. International Conference on Computer Ethics 2023, 1(1), 1–17.


Technical Transparency for Robot Navigation Through AR Visualizations

Dyck, L., Beierling, H., Helmert, R., & Vollmer, A.-L. (2023). Technical Transparency for Robot Navigation Through AR Visualizations. Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 720–724. https://doi.org/10.1145/3568294.3580181


Speech Disentanglement for Analysis and Modification of Acoustic and Perceptual Speaker Characteristics

Rautenberg, F., Kuhlmann, M., Ebbers, J., Wiechmann, J., Seebauer, F., Wagner, P., & Haeb-Umbach, R. (2023). Speech Disentanglement for Analysis and Modification of Acoustic and Perceptual Speaker Characteristics. Fortschritte Der Akustik - DAGA 2023, 1409–1412.


Emotional Debiasing Explanations for Decisions in HCI

Schütze, C., Lammert, O., Richter, B., Thommes, K., & Wrede, B. (2023). Emotional Debiasing Explanations for Decisions in HCI. Artificial Intelligence in HCI. https://doi.org/10.1007/978-3-031-35891-3_20


SNAPE: A Sequential Non-Stationary Decision Process Model for Adaptive Explanation Generation

Robrecht, A., & Kopp, S. (2023). SNAPE: A Sequential Non-Stationary Decision Process Model for Adaptive Explanation Generation. Proceedings of the 15th International Conference on Agents and Artificial Intelligence, 48–58. https://doi.org/10.5220/0011671300003393


A Study on the Benefits and Drawbacks of Adaptivity in AI-generated Explanations

Robrecht, A., Rothgänger, M., & Kopp, S. (2023). A Study on the Benefits and Drawbacks of Adaptivity in AI-generated Explanations. Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents. https://doi.org/10.1145/3570945.3607339


Modeling Highlighting of Metaphors in Multitask Contrastive Learning Paradigms

Sengupta, M., Alshomary, M., Scharlau, I., & Wachsmuth, H. (2023). Modeling Highlighting of Metaphors in Multitask Contrastive Learning Paradigms. In H. Bouamor, J. Pino, & K. Bali (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 4636–4659). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-emnlp.308


The Return of Black Box Theory in Explainable AI

Beer, F., & Schulz, C. (n.d.). The Return of Black Box Theory in Explainable AI. 4S Conference (Society for the Social Studies of Science), Honolulu/Hawaii, November 9.


Vernacular Metaphors of AI

Schulz, C., & Wilmes, A. (n.d.). Vernacular Metaphors of AI.


Contrastiveness in the context of action demonstration: an eye-tracking study on its effects on action perception and action recall

Singh, A., & Rohlfing, K. J. (2023). Contrastiveness in the context of action demonstration: an eye-tracking study on its effects on action perception and action recall. Proceedings of the Annual Meeting of the Cognitive Science Society, 45(45). 45th Annual Conference of the Cognitive Science Society, Sydney.


A Prototype of an Interactive Clinical Decision Support System with Counterfactual Explanations

Liedeker, F., & Cimiano, P. (2023). A Prototype of an Interactive Clinical Decision Support System with Counterfactual Explanations. xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), Lisbon.


Exploring the Semantic Dialogue Patterns of Explanations – a Case Study of Game Explanations

Fisher, J. B., Robrecht, A., Kopp, S., & Rohlfing, K. J. (2023). Exploring the Semantic Dialogue Patterns of Explanations – a Case Study of Game Explanations. Proceedings of the 27th Workshop on the Semantics and Pragmatics of Dialogue. SemDial, Maribor.


Comparing Humans and Algorithms in Feature Ranking: A Case-Study in the Medical Domain

Hanselle, J. M., Kornowicz, J., Heid, S., Thommes, K., & Hüllermeier, E. (2023). Comparing Humans and Algorithms in Feature Ranking: A Case-Study in the Medical Domain. In M. Leyer & J. Wichmann (Eds.), LWDA’23: Learning, Knowledge, Data, Analysis.


Conclusion-based Counter-Argument Generation

Alshomary, M., & Wachsmuth, H. (2023). Conclusion-based Counter-Argument Generation. In A. Vlachos & I. Augenstein (Eds.), Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (pp. 957–967). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.eacl-main.67


Adding Why to What? Analyses of an Everyday Explanation

Terfloth, L., Schaffer, M., Buhl, H. M., & Schulte, C. (2023). Adding Why to What? Analyses of an Everyday Explanation. 1st World Conference on eXplainable Artificial Intelligence (xAI 2023), Lisbon. https://doi.org/10.1007/978-3-031-44070-0_13


Does listener gaze in face-to-face interaction follow the Entropy Rate Constancy principle: An empirical study

Wang, Y., & Buschmeier, H. (2023). Does listener gaze in face-to-face interaction follow the Entropy Rate Constancy principle: An empirical study. Findings of the Association for Computational Linguistics: EMNLP 2023, 15372–15379.


iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios

Muschalik, M., Fumagalli, F., Jagtani, R., Hammer, B., & Huellermeier, E. (2023). iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios. Proceedings of the World Conference on Explainable Artificial Intelligence (XAI). https://doi.org/10.1007/978-3-031-44064-9_11


On Feature Removal for Explainability in Dynamic Environments

Fumagalli, F., Muschalik, M., Hüllermeier, E., & Hammer, B. (2023). On Feature Removal for Explainability in Dynamic Environments. Proceedings of the European Symposium on Artificial Neural Networks (ESANN). ESANN 2023 - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges (Belgium) and online. https://doi.org/10.14428/ESANN/2023.ES2023-148


SHAP-IQ: Unified Approximation of any-order Shapley Interactions

Fumagalli, F., Muschalik, M., Kolpaczki, P., Hüllermeier, E., & Hammer, B. (2023). SHAP-IQ: Unified Approximation of any-order Shapley Interactions. Advances in Neural Information Processing Systems (NeurIPS), 36, 11515–11551.


2022

Technically enabled explaining of voice characteristics

Wiechmann, J., Glarner, T., Rautenberg, F., Wagner, P., & Haeb-Umbach, R. (2022). Technically enabled explaining of voice characteristics. 18. Phonetik und Phonologie im deutschsprachigen Raum (P&P).


An Architecture Supporting Configurable Autonomous Multimodal Joint-Attention-Therapy for Various Robotic Systems

Groß, A., Schütze, C., Wrede, B., & Richter, B. (2022). An Architecture Supporting Configurable Autonomous Multimodal Joint-Attention-Therapy for Various Robotic Systems. International Conference on Multimodal Interaction, 154–159. https://doi.org/10.1145/3536220.3558070


Enabling Non-Technical Domain Experts to Create Robot-Assisted Therapeutic Scenarios via Visual Programming

Schütze, C., Groß, A., Wrede, B., & Richter, B. (2022). Enabling Non-Technical Domain Experts to Create Robot-Assisted Therapeutic Scenarios via Visual Programming. International Conference on Multimodal Interaction, 166–170. https://doi.org/10.1145/3536220.3558072


User Involvement in Training Smart Home Agents

Sieger, L. N., Hermann, J., Schomäcker, A., Heindorf, S., Meske, C., Hey, C.-C., & Doğangün, A. (2022). User Involvement in Training Smart Home Agents. International Conference on Human-Agent Interaction. HAI ’22: International Conference on Human-Agent Interaction, Christchurch, New Zealand. https://doi.org/10.1145/3527188.3561914


(De)Coding social practice in the field of XAI: Towards a co-constructive framework of explanations and understanding between lay users and algorithmic systems

Finke, J., Horwath, I., Matzner, T., & Schulz, C. (2022). (De)Coding social practice in the field of XAI: Towards a co-constructive framework of explanations and understanding between lay users and algorithmic systems. Artificial Intelligence in HCI, 149–160. https://doi.org/10.1007/978-3-031-05643-7_10


“Mama Always Had a Way of Explaining Things So I Could Understand”: A Dialogue Corpus for Learning to Construct Explanations

Wachsmuth, H., & Alshomary, M. (2022). “Mama Always Had a Way of Explaining Things So I Could Understand”: A Dialogue Corpus for Learning to Construct Explanations. In N. Calzolari, C.-R. Huang, H. Kim, J. Pustejovsky, L. Wanner, K.-S. Choi, P.-M. Ryu, H.-H. Chen, L. Donatelli, H. Ji, S. Kurohashi, P. Paggio, N. Xue, S. Kim, Y. Hahm, Z. He, T. K. Lee, E. Santus, F. Bond, & S.-H. Na (Eds.), Proceedings of the 29th International Conference on Computational Linguistics (pp. 344–354). International Committee on Computational Linguistics.


Back to the Roots: Predicting the Source Domain of Metaphors using Contrastive Learning

Sengupta, M., Alshomary, M., & Wachsmuth, H. (2022). Back to the Roots: Predicting the Source Domain of Metaphors using Contrastive Learning. Proceedings of the 2022 Workshop on Figurative Language Processing.


Formalizing cognitive biases in medical diagnostic reasoning

Battefeld, D., & Kopp, S. (2022). Formalizing cognitive biases in medical diagnostic reasoning. Proceedings of the 8th Workshop on Formal and Cognitive Reasoning. 8th Workshop on Formal and Cognitive Reasoning (FCR), Trier.


Generating Contrastive Snippets for Argument Search

Alshomary, M., Rieskamp, J., & Wachsmuth, H. (2022). Generating Contrastive Snippets for Argument Search. Proceedings of the 9th International Conference on Computational Models of Argument, 21–31. http://dx.doi.org/10.3233/FAIA220138


The Moral Debater: A Study on the Computational Generation of Morally Framed Arguments

Alshomary, M., El Baff, R., Gurcke, T., & Wachsmuth, H. (2022). The Moral Debater: A Study on the Computational Generation of Morally Framed Arguments. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 8782–8797.


Explaining Reject Options of Learning Vector Quantization Classifiers

Artelt, A., Brinkrolf, J., Visser, R., & Hammer, B. (2022). Explaining Reject Options of Learning Vector Quantization Classifiers. Proceedings of the 14th International Joint Conference on Computational Intelligence. https://doi.org/10.5220/0011389600003332


Model Agnostic Local Explanations of Reject

Artelt, A., Visser, R., & Hammer, B. (2022). Model Agnostic Local Explanations of Reject. ESANN 2022 Proceedings. https://doi.org/10.14428/esann/2022.es2022-34



Conference Abstracts

2025

Die Ambivalenz von Sichtbarkeit. Ethische Perspektiven auf die digitale Transformation

Philippi, M. (2025). Die Ambivalenz von Sichtbarkeit. Ethische Perspektiven auf die digitale Transformation. Sorbische Lebenswelten im digitalen Zeitalter, BTU Cottbus-Senftenberg, Cottbus.


Grenzen des Verstehens

Philippi, M. (2025). Grenzen des Verstehens. Hermeneutik - oder: Was heißt “Verstehen”? Januartagung der Evangelischen Forschungsakademie, Berlin.


Acoustic detection of false positive backchannels of understanding in explanations

Türk, O., Lazarov, S. T., Buschmeier, H., Wagner, P., & Grimminger, A. (2025). Acoustic detection of false positive backchannels of understanding in explanations. LingCologne 2025 – Book of Abstracts, 36.


A BFO-based ontology of context for Social XAI

Booshehri, M., Buschmeier, H., & Cimiano, P. (2025). A BFO-based ontology of context for Social XAI. Abstracts of the 3rd TRR 318 Conference: Contextualizing Explanations. 3rd TRR 318 Conference: Contextualizing Explanations, Bielefeld, Germany.


The Dual Nature as a Local Context to Explore Verbal Behaviour in Game Explanations

Fisher, J. B., & Terfloth, L. (2025). The Dual Nature as a Local Context to Explore Verbal Behaviour in Game Explanations. Proceedings of the 29th Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2025).


2024

Effects of task difficulty on visual processing speed

Banh, N. C., & Scharlau, I. (2024). Effects of task difficulty on visual processing speed. Tagung experimentell arbeitender Psycholog:innen (TeaP), Regensburg.


AI explainability, temporality, and civic virtue

Reijers, W., Matzner, T., Alpsancar, S., & Philippi, M. (2024). AI explainability, temporality, and civic virtue. Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21st International Conference on the Ethical and Social Impacts of ICT. Universidad de La Rioja, 2024.


Unpacking the purposes of explainable AI

Alpsancar, S., Matzner, T., & Philippi, M. (2024). Unpacking the purposes of explainable AI. Smart Ethics in the Digital World: Proceedings of the ETHICOMP 2024. 21st International Conference on the Ethical and Social Impacts of ICT, 31–35.


The mental representation of the object of explanation in the process of co-constructive explanations

Schaffer, M., & Buhl, H. M. (2024). The mental representation of the object of explanation in the process of co-constructive explanations. In U. Ansorge, B. Szaszkó, & L. Werner (Eds.), 53rd DGPs Congress - Abstracts.


Von der Kunst, die richtigen Fragen zu stellen. Das Potential der Phänomenologie für die Technikfolgenabschätzung

Philippi, M. (2024). Von der Kunst, die richtigen Fragen zu stellen. Das Potential der Phänomenologie für die Technikfolgenabschätzung. TA24: Methoden für die Technikfolgenabschätzung – im Spannungsfeld zwischen bewährter Praxis und neuen Möglichkeiten, ÖAW Wien.


Dealing responsibly with tacit assumptions. An interdisciplinary approach to the integration of ethical reflexion into user modeling

Philippi, M., & Mindlin, D. (2024). Dealing responsibly with tacit assumptions. An interdisciplinary approach to the integration of ethical reflexion into user modeling. EASST-4S 2024: Making and doing transformations, VU Amsterdam.


How to address ethical problems in a multi-perspective context: Interdisciplinary challenges of XAI

Philippi, M. (2024). How to address ethical problems in a multi-perspective context: Interdisciplinary challenges of XAI. fpet (Forum on Philosophy, Engineering, and Technology) 2024, ZKM Karlsruhe.


Ethics of Explainable AI

Philippi, M., & Reijers, W. (2024). Ethics of Explainable AI. Ethics and Normativity of Explainable AI, Paderborn.


Dual-use potential in humanitarian UAVs

Philippi, M. (2024). Dual-use potential in humanitarian UAVs. Human-machine learning? Interaction with deadly machines, Paderborn.


Herausforderungen und Potentiale von erklärbarer KI für Technikfolgenabschätzung und Politikberatung

Philippi, M. (2024). Herausforderungen und Potentiale von erklärbarer KI für Technikfolgenabschätzung und Politikberatung. NTA11: Politikberatungskompetenzen heute, Berlin.


Transparency and persuasion: Chances and Risks of Explainable AI applications in modeling for policy

Philippi, M. (2024). Transparency and persuasion: Chances and Risks of Explainable AI applications in modeling for policy. SAS24: Modeling for Policy, HLRS Stuttgart.


Automatic reconstruction of dialogue participants’ coordinating gaze behavior from multiple camera perspectives

Riechmann, A. N., & Buschmeier, H. (2024). Automatic reconstruction of dialogue participants’ coordinating gaze behavior from multiple camera perspectives. Book of Abstracts of the 2nd International Multimodal Communication Symposium, 38–39.


The role of interactive gestures in explanatory interactions

Lohmer, V., & Kern, F. (2024). The role of interactive gestures in explanatory interactions. Second International Multimodal Communication Symposium (MMSYM) - Book of Abstracts. 2nd International Multimodal Communication Symposium, Goethe-Universität Frankfurt, Germany.


2023

First steps towards real-time assessment of attentional weights and capacity according to TVA

Banh, N. C., & Scharlau, I. (2023). First steps towards real-time assessment of attentional weights and capacity according to TVA. In S. Merz, C. Frings, B. Leuchtenberg, B. Moeller, S. Mueller, R. Neumann, B. Pastötter, L. Pingen, & G. Schui (Eds.), Abstracts of the 65th TeaP. ZPID (Leibniz Institute for Psychology). https://doi.org/10.23668/PSYCHARCHIVES.12945


Dynamic Feature Selection in AI-based Diagnostic Decision Support for Epilepsy

Liedeker, F., & Cimiano, P. (2023). Dynamic Feature Selection in AI-based Diagnostic Decision Support for Epilepsy. 1st International Conference on Artificial Intelligence in Epilepsy and Neurological Disorders, Breckenridge, CO, USA.


Approaches of Assessing Understanding Using Video-Recall Data

Lazarov, S. T., Schaffer, M., & Ronoh, E. K. (2023). Approaches of Assessing Understanding Using Video-Recall Data. 2nd TRR 318 Conference “Measuring Understanding”, Paderborn.


Erklärungsverläufe und -inhalte aus Sicht Erklärender - eine qualitative Studie

Schaffer, M., & Buhl, H. M. (2023). Erklärungsverläufe und -inhalte aus Sicht Erklärender - eine qualitative Studie. PAEPS 19. Fachgruppentagung Pädagogische Psychologie: Lehren und Lernen in einer Welt im Wandel, Kiel.


Trust and awareness in the context of search and rescue missions

Philippi, M. (2023). Trust and awareness in the context of search and rescue missions. SAS23: Reliability or Trustworthiness?, HLRS Stuttgart.


Explaining the Technical Artifact Quarto!: How Gestures are used in Everyday Explanations

Lohmer, V., Terfloth, L., & Kern, F. (2023). Explaining the Technical Artifact Quarto!: How Gestures are used in Everyday Explanations. First International Multimodal Communication Symposium - Book of Abstracts. 1st International Multimodal Communication Symposium, Universitat Pompeu Fabra, Barcelona.


2022

Effects of verbal negation on TVA’s capacity and weight parameters

Banh, N. C., & Scharlau, I. (2022). Effects of verbal negation on TVA’s capacity and weight parameters. In S. Malejka, M. Barth, H. Haider, & C. Stahl (Eds.), TeaP 2022 - Abstracts of the 64th Conference of Experimental Psychologists. Pabst Science Publishers. https://doi.org/10.23668/psycharchives.5677


Folgen wiederholter Negation auf die Aufmerksamkeit

Banh, N. C., Scharlau, I., & Rohlfing, K. J. (2022). Folgen wiederholter Negation auf die Aufmerksamkeit. In C. Bermeitinger & W. Greve (Eds.), 52. Kongress der Deutschen Gesellschaft für Psychologie.


Die Anpassungen von Erklärungen an das Verständnis des Erklärgegenstandes der Gesprächspartner

Schaffer, M., Lea, B., Schulte, C., & Buhl, H. M. (2022). Die Anpassungen von Erklärungen an das Verständnis des Erklärgegenstandes der Gesprächspartner. In C. Bermeitinger & W. Greve (Eds.), 52nd DGPs Congress - Abstracts.

