Measuring Understanding
Welcome to this year’s 2nd TRR Conference on “Measuring Understanding”.
Current XAI research centres on how understanding can be achieved. The topics cover methods and tools to assess and measure understanding in the context of (a) dyadic everyday explanations, (b) the interpretability or explainability of AI systems, or (c) institutional environments. Specifically, the conference focuses on the methodological challenge of operationalising and measuring understanding in diverse explanatory settings, including both human-human and human-machine interaction, and on the implications of these measurements for XAI. Researchers from across Europe are coming together to discuss recent challenges and topics in measuring understanding.
Research Tracks & Understanding Workshop
Parallel Research Track 1 - Monday
15:45 - 17:15
| CA and Ethnomethodology | Evaluating XAI via User Studies | AI for Education and Training |
|---|---|---|
| Chair: / Room: L2.202 | Chair: / Room: L2.201 | Chair: / Room: L1.201 |
| What 'Counts' as Explanation in Social Interaction | Evaluating Concept- and Relation-based Explanations for Image Classification | From Machine Learning to Machine Teaching: How Humans Can Learn from Explainable Artificial Intelligence |
| Understanding Beyond Measurement | Evaluating a Multi-Modal Design for an Informative Take-Over Request in a Drone-Controller Setting | AR-mediated Explainability for Teaching and Cooperation |
| Understanding Robots in Public: The Influence of Other Humans' Presence on Human-Robot Interaction | Understanding Path Planning Explanations | Collaboration with AI Technologies: AI-Developed Curricula in Language Education |
| A, B, C, It's Easy as 1, 2, 3 - Inviting Linguistic Complexity to the Process of Operationalizing Understanding | What's happening right now? Passenger Understanding of Highly Automated Shuttle's Minimal Risk Maneuvers by Internal Human-Machine Interfaces | |
Understanding Workshop - Tuesday
11:00 - 12:30
Chair: Vivien Lohmer
Room: L2.202

- Measuring the Progress of Understanding of the Explainee via Substantive Contributions in Explanatory Dialogues
- Approaches of Assessing Understanding Using Video-Recall Data
- Measuring Intra-Individual Differences in Signals of Understanding
Parallel Research Track 2 - Tuesday
13:45 - 15:15
| Developing Instruments and Models for Measuring Understanding | Classification of Understanding and Human Oversight | Psychological and Cognitive Science View on XAI |
|---|---|---|
| Chair: / Room: L2.202 | Chair: / Room: L2.201 | Chair: / Room: L1.201 |
| Unraveling the Relationship Between Explanation as a Process and Understanding. Using the Block Model as Holistic Framework for Understanding Explainable AI | Towards a BFO-based Ontology of Understanding Explanations | Mental Model Disparity and its Effect on User Understanding and Satisfaction in XAI |
| Conceptualization of Subjective Understanding towards Scale Development | Beyond Understanding. Towards a Comprehensive Measure of Human Oversight in AI | A Machine Learning Approach to the Prediction of Individual Differences in Psychological Reactivities |
| A Communication Architecture for Measuring Understanding | Understanding as an Interactive Precondition and a Problem: Securing of Understanding in Calls to the Ministry for State Security of the GDR | Do Humans and CNN Better Understand the Visual Explanations Generated by other Humans or XAI Algorithms? |
| Meta AI Literacy Scale: Development and Testing of an AI Literacy Questionnaire Based on Well-Founded Competency Models and Psychological Change and Meta-Competencies | Unfooling SHAP and SAGE: Knockoff Imputation for Shapley Values | Shedding Light: A Survey of Concept-Based Explainable AI |
Parallel Research Track 3 - Tuesday
15:45 - 16:30
| Evaluation of Explanations | On the Interpretation of XAI Results |
|---|---|
| Chair: / Room: L2.202 | Chair: / Room: L2.201 |
| Modeling the Quality of Dialogical Explanations | Pitfalls of Interpreting the Shapley Value in Explainable AI |
| Compare-xAI: Toward Unifying Functional Testing Methods for Post-hoc XAI Algorithms into a Multi-dimensional Benchmark | On the Confounding Roles of Explanation Faithfulness and Intuitivity in Measuring Understanding |
Please send general questions to conference@trr318.uni-paderborn.de
and media enquiries to communication@trr318.uni-paderborn.de.
Further information on the conference