Publications

2022

Schütze, C., Groß, A., Wrede, B., & Richter, B. (2022). Enabling Non-Technical Domain Experts to Create Robot-Assisted Therapeutic Scenarios via Visual Programming.

In this paper, we present visual programming software that enables non-technical domain experts to create robot-assisted therapy scenarios for multiple robotic platforms. Our new approach is evaluated by comparing it with Choregraphe, the standard visual programming framework for the widely used robot platforms Pepper and NAO. We show that our approach receives higher usability ratings and allows users to perform better in some practical tasks, including understanding, changing, and creating small robot-assisted therapy scenarios.

Schütze, C., Groß, A., Wrede, B., & Richter, B. (2022). Enabling Non-Technical Domain Experts to Create Robot-Assisted Therapeutic Scenarios via Visual Programming. In R. Tumuluri, N. Sebe, G. Pingali, D. B. Jayagopi, A. Dhall, R. Singh, L. Anthony, et al. (Eds.), ICMI '22 Companion: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (pp. 166-170). New York, NY, USA: ACM. https://doi.org/10.1145/3536220.3558072

Groß, A., Schütze, C., Wrede, B., & Richter, B. (2022). An Architecture Supporting Configurable Autonomous Multimodal Joint-Attention-Therapy for Various Robotic Systems

In this paper, we present a software architecture for robot-assisted, configurable, and autonomous Joint-Attention-Training scenarios to support autism therapy. The focus of the work is the expandability of the architecture for use with different robots, as well as maximizing the usability of the interface for the therapeutic user. By evaluating the user experience, we draw initial conclusions about the usability of the system for computer scientists and non-computer scientists. Both groups were able to solve different tasks without major issues, and the overall usability of the system was rated as good.

Groß, A., Schütze, C., Wrede, B., & Richter, B. (2022). An Architecture Supporting Configurable Autonomous Multimodal Joint-Attention-Therapy for Various Robotic Systems. In R. Tumuluri, N. Sebe, G. Pingali, D. B. Jayagopi, A. Dhall, R. Singh, L. Anthony, et al. (Eds.), ICMI '22 Companion: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (pp. 154-159). New York, NY, USA: ACM. https://doi.org/10.1145/3536220.3558070

Schulz, C. (2022). A new algorithmic imaginary

The algorithmic imaginary as a theoretical concept has received increasing attention in recent years, as it addresses users’ appropriation of algorithmic processes that operate in opacity. But the concept originally starts only from the users’ point of view, while the processes on the platforms’ side are largely left out. In contrast, this paper argues that what is true for users is also valid for algorithmic processes and the designers behind them. On the one hand, the algorithm imagines users’ future behavior via machine learning, which is supposed to predict all their future actions. On the other hand, with every new implementation of features such as social media feeds, the designers anticipate different actions that could potentially be performed by users. In order to bring into view this permanently reciprocal interplay coupled to the imaginary, in which not only the users are involved, I will argue for a more comprehensive and theoretically precise concept of the algorithmic imaginary, referring to the theory of Cornelius Castoriadis. From such a perspective, an important contribution can be formulated for a theory of social media platforms that goes beyond praxeocentrism or structural determinism.

Schulz, C. (2022). A new algorithmic imaginary. Media, Culture & Society. https://doi.org/10.1177/01634437221136014

Rohlfing, K. J., Vollmer, A.-L., Fritsch, J., & Wrede, B. (2022). Which “motionese” parameters change with children's age? Disentangling attention-getting from action-structuring modifications

Modified action demonstration—dubbed motionese—has been proposed as a way to help children recognize the structure and meaning of actions. However, until now, it has been investigated only in young infants. This brief research report presents findings from a cross-sectional study that applied seven motionese parameters to parental action demonstrations for three groups of 8–11-, 12–23-, and 24–30-month-old children; a second study investigated the youngest group of participants longitudinally to corroborate the cross-sectional results. Results of both studies suggested that four motionese parameters (Motion Pauses, Pace, Velocity, Acceleration) seem to structure the action by organizing it in motion pauses. Whereas these parameters persist across different ages, three other parameters (Demonstration Length, Roundness, and Range) occur predominantly in the younger group and seem to serve to organize infants' attention on the basis of movement. Results are discussed in terms of facilitative vs. pedagogical learning.

Rohlfing, K. J., Vollmer, A.-L., Fritsch, J., & Wrede, B. (2022). Which “motionese” parameters change with children's age? Disentangling attention-getting from action-structuring modifications. Front. Commun. 7:922405. doi: 10.3389/fcomm.2022.922405
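
The motionese parameters examined in this study are kinematic measures that can be approximated from recorded hand trajectories. The Python sketch below is purely illustrative: the sampling rate, pause threshold, and the exact operationalizations are assumptions and not the paper's actual measurement procedure.

```python
import numpy as np

def motionese_parameters(xy, fps=30.0, pause_speed_thresh=0.02):
    """Approximate a few motionese parameters from a 2D hand trajectory.

    xy: array of shape (T, 2) with the hand position per video frame.
    fps: sampling rate of the recording (assumed).
    pause_speed_thresh: speed below which a frame counts as a pause (assumed).
    """
    xy = np.asarray(xy, dtype=float)
    dt = 1.0 / fps

    # Frame-to-frame speed and (absolute) acceleration.
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt
    accel = np.abs(np.diff(speed)) / dt

    # Motion pauses: contiguous runs of frames with near-zero speed.
    paused = speed < pause_speed_thresh
    n_pauses = int(np.sum(np.diff(paused.astype(int)) == 1)) + int(paused[0])

    return {
        "demonstration_length_s": len(xy) * dt,
        "mean_velocity": float(speed.mean()),
        "mean_acceleration": float(accel.mean()),
        "motion_pauses": n_pauses,
        "range": float(np.ptp(xy, axis=0).max()),  # largest spatial extent
    }
```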

Rohlfing, K. J., et al. (2022). Social/dialogical roles of social robots in supporting children’s learning of language and literacy—A review and analysis of innovative roles

One of the many purposes for which social robots are designed is education, and there have been many attempts to systematize their potential in this field. What these attempts have in common is the recognition that learning can be supported in a variety of ways because a learner can be engaged in different activities that foster learning. Up to now, three roles have been proposed when designing these activities for robots: as a teacher or tutor, a learning peer, or a novice. Current research proposes that deciding in favor of one role over another depends on the content or preferred pedagogical form. However, the design of activities changes not only the content of learning, but also the nature of a human–robot social relationship. This is particularly important in language acquisition, which has been recognized as a social endeavor. The following review aims to specify the differences in human–robot social relationships when children learn language through interacting with a social robot. After proposing categories for comparing these different relationships, we review established and more specific, innovative roles that a robot can play in language-learning scenarios. This follows Mead’s (1946) theoretical approach proposing that social roles are performed in interactive acts. These acts are crucial for learning, because they not only shape the social environment of learning but also engage the learner to different degrees. We specify the degree of engagement by referring to Chi’s (2009) progression of learning activities that range from active and constructive toward interactive, with the latter fostering deeper learning. Taken together, this approach enables us to compare and evaluate different human–robot social relationships that arise when applying a robot in a particular social role.

Rohlfing, K. J., Altvater-Mackensen, N., Caruana, N., van den Berghe, R., Bruno, B., Tolksdorf, N. F., & Hanulíková, A. (2022). Social/dialogical roles of social robots in supporting children’s learning of language and literacy—A review and analysis of innovative roles. Front. Robot. AI 9:971749. doi: 10.3389/frobt.2022.971749

Wachsmuth, H., & Alshomary, M. (2022). "Mama Always Had a Way of Explaining Things So I Could Understand": A Dialogue Corpus for Learning to Construct Explanations

As AI becomes more and more pervasive in everyday life, humans have an increasing demand to understand its behavior and decisions. Most research on explainable AI builds on the premise that there is one ideal explanation to be found. In fact, however, everyday explanations are co-constructed in a dialogue between the person explaining (the explainer) and the specific person being explained to (the explainee). In this paper, we introduce a first corpus of dialogical explanations to enable NLP research on how humans explain as well as on how AI can learn to imitate this process. The corpus consists of 65 transcribed English dialogues from the Wired video series 5 Levels, explaining 13 topics to five explainees of different proficiency. All 1550 dialogue turns have been manually labeled by five independent professionals for the topic discussed as well as for the dialogue act and the explanation move performed. We analyze linguistic patterns of explainers and explainees, and we explore differences across proficiency levels. BERT-based baseline results indicate that sequence information helps predict topics, acts, and moves effectively.

Wachsmuth, H., & Alshomary, M. (2022). "Mama Always Had a Way of Explaining Things So I Could Understand": A Dialogue Corpus for Learning to Construct Explanations. Proceedings of the 29th International Conference on Computational Linguistics. arXiv. https://doi.org/10.48550/ARXIV.2209.02508
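
The BERT-based baselines mentioned in the abstract amount to fine-tuning a pretrained encoder to label each dialogue turn. The sketch below is not the authors' code: the label set and checkpoint are placeholder assumptions, and the model would still need to be fine-tuned on the annotated corpus. It only shows how such a turn-level classifier could be set up with the Hugging Face transformers library.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical label set; the corpus defines its own dialogue act inventory.
DIALOGUE_ACTS = ["question", "answer", "informing_statement", "other"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(DIALOGUE_ACTS)
)  # fine-tune on the labeled turns before relying on predictions

def classify_turn(turn_text, previous_turn=""):
    # Encoding the previous turn together with the current one is one simple
    # way to give the classifier the sequence context the abstract refers to.
    inputs = tokenizer(previous_turn, turn_text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return DIALOGUE_ACTS[int(logits.argmax(dim=-1))]

print(classify_turn("A black hole is a region where gravity is extremely strong.",
                    previous_turn="Can you explain what a black hole is?"))
```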

Battefeld, D., & Kopp, S. (2022). Formalizing cognitive biases in medical diagnostic reasoning

This paper presents preliminary work on the formalization of three prominent cognitive biases in the diagnostic reasoning process over epileptic seizures, psychogenic seizures, and syncopes. Diagnostic reasoning is understood as an iterative exploration of medical evidence. This exploration is represented as a partially observable Markov decision process where the state (i.e., the correct diagnosis) is uncertain. Observation likelihoods and belief updates are computed using a Bayesian network which defines the interrelation between medical risk factors, diagnoses, and potential findings. The decision problem is solved via partially observable upper confidence bounds for trees in Monte-Carlo planning. We compute a biased diagnostic exploration policy by altering the generated state transitions, observations, and rewards during look-ahead simulations. The resulting diagnostic policies reproduce reasoning errors which have only been described informally in the medical literature. We plan to use this formal representation in the future to inversely detect and classify biased reasoning in actual diagnostic trajectories obtained from physicians.

Battefeld, D., & Kopp, S. (2022). Formalizing cognitive biases in medical diagnostic reasoning. Presented at the 8th Workshop on Formal and Cognitive Reasoning (FCR), Trier. Link: https://pub.uni-bielefeld.de/record/2964809
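
At its core, the formalization maintains a belief over the three candidate diagnoses that is updated as findings are observed, and biases are introduced by distorting the quantities the planner simulates. The sketch below is a drastic simplification under assumed, non-clinical numbers: it shows only a biased Bayesian belief update, whereas the paper uses a full Bayesian network and Monte-Carlo planning with partially observable UCT.

```python
import numpy as np

DIAGNOSES = ["epileptic_seizure", "psychogenic_seizure", "syncope"]

# Illustrative likelihoods P(finding present | diagnosis); not clinical values.
LIKELIHOODS = {
    "tongue_bite":    np.array([0.45, 0.05, 0.02]),
    "eyes_closed":    np.array([0.10, 0.70, 0.30]),
    "brief_duration": np.array([0.20, 0.15, 0.90]),
}

def update_belief(belief, finding, present=True, anchor_strength=1.0):
    """One Bayesian update of the belief over diagnoses.

    anchor_strength > 1 inflates the likelihood of the currently most probable
    diagnosis before normalizing, a crude stand-in for the anchoring bias that
    the paper models by altering the planner's simulated transitions and rewards.
    """
    lik = LIKELIHOODS[finding].copy() if present else 1.0 - LIKELIHOODS[finding]
    lik[np.argmax(belief)] *= anchor_strength
    posterior = belief * lik
    return posterior / posterior.sum()

belief = np.full(len(DIAGNOSES), 1.0 / len(DIAGNOSES))   # uniform prior
belief = update_belief(belief, "eyes_closed", present=True, anchor_strength=2.0)
belief = update_belief(belief, "tongue_bite", present=False, anchor_strength=2.0)
print(dict(zip(DIAGNOSES, np.round(belief, 3))))
```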

Muschalik, M., Fumagalli, F., Hammer, B., & Hüllermeier, E. (2022). Agnostic Explanation of Model Change based on Feature Importance

Explainable Artificial Intelligence (XAI) has mainly focused on static learning tasks so far. In this paper, we consider XAI in the context of online learning in dynamic environments, such as learning from real-time data streams, where models are learned incrementally and continuously adapted over the course of time. More specifically, we motivate the problem of explaining model change, i.e. explaining the difference between models before and after adaptation, instead of the models themselves. In this regard, we provide the first efficient model-agnostic approach to dynamically detecting, quantifying, and explaining significant model changes. Our approach is based on an adaptation of the well-known Permutation Feature Importance (PFI) measure. It includes two hyperparameters that control the sensitivity and directly influence explanation frequency, so that a human user can adjust the method to individual requirements and application needs. We assess and validate our method’s efficacy on illustrative synthetic data streams with three popular model classes.

Muschalik, M., Fumagalli, F., Hammer, B., & Hüllermeier, E. (2022). Agnostic Explanation of Model Change based on Feature Importance. Künstl Intell. doi: 10.1007/s13218-022-00766-6
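
Conceptually, the method compares feature importance profiles of the model before and after an incremental adaptation step and reports features whose importance changed notably. The toy sketch below uses scikit-learn's batch permutation_importance as a stand-in for the paper's incremental PFI estimator; the data, model, and change threshold are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import SGDClassifier

# Toy "stream": an old batch and a new batch with a shifted distribution.
X_old, y_old = make_classification(n_samples=500, n_features=6, random_state=0)
X_new, y_new = make_classification(n_samples=500, n_features=6, shift=1.0, random_state=1)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_old, y_old, classes=np.unique(y_old))
pfi_before = permutation_importance(model, X_old, y_old, n_repeats=10, random_state=0)

model.partial_fit(X_new, y_new)               # incremental adaptation step
pfi_after = permutation_importance(model, X_new, y_new, n_repeats=10, random_state=0)

# Explain the *change*: which features gained or lost importance?
delta = pfi_after.importances_mean - pfi_before.importances_mean
threshold = 0.05                              # assumed sensitivity hyperparameter
for i, d in enumerate(delta):
    if abs(d) > threshold:
        print(f"feature {i}: importance changed by {d:+.3f}")
```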

Finke, J., Horwath, I., Matzner, T., Schulz, C. (2022). (De)Coding Social Practice in the Field of XAI: Towards a Co-constructive Framework of Explanations and Understanding Between Lay Users and Algorithmic Systems

Advances in the development of AI and its application in many areas of society have given rise to an ever-increasing need for society’s members to understand, at least to a certain degree, how these technologies work. Where users are concerned, most approaches in Explainable Artificial Intelligence (XAI) assume a rather narrow view of the social process of explaining and show an undifferentiated assessment of explainees’ understanding, treating explainees mostly as passive recipients of information. The actual knowledge, motives, needs and challenges of (lay) users in algorithmic environments remain largely unexamined. We argue for the consideration of explanation as a social practice in which explainer and explainee co-construct understanding jointly. Therefore, we seek to enable lay users to document, evaluate, and reflect on distinct AI interactions and, correspondingly, on how explainable AI actually is in their daily lives. With this contribution we discuss our methodological approach, which enhances the documentary method by implementing ‘digital diaries’ via the mobile instant messaging app WhatsApp – the most widely used instant messaging service worldwide. Furthermore, from a theoretical stance, we examine the socio-cultural patterns of orientation that guide users’ interactions with AI and their imaginaries of the technologies – a sphere that is mostly obscured and hard to access for researchers. Finally, we complete our paper with empirical insights by referring to previous studies that point out the relevance of perspectives on explaining and understanding as a co-constructive social practice.

Finke, J., Horwath, I., Matzner, T., & Schulz, C. (2022). (De)Coding Social Practice in the Field of XAI: Towards a Co-constructive Framework of Explanations and Understanding Between Lay Users and Algorithmic Systems. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2022. Lecture Notes in Computer Science, vol. 13336. Springer, Cham. https://doi.org/10.1007/978-3-031-05643-7_10

Axelsson, A., Buschmeier, H., & Skantze, G. (2022). Modeling Feedback in Interaction With Conversational Agents - A Review

Intelligent agents interacting with humans through conversation (such as a robot, embodied conversational agent, or chatbot) need to receive feedback from the human to make sure that their communicative acts have the intended consequences. At the same time, the human interacting with the agent will also seek feedback, in order to ensure that her communicative acts have the intended consequences. In this review article, we give an overview of past and current research on how intelligent agents should be able both to give meaningful feedback toward humans and to understand feedback given by users. The review covers feedback across different modalities (e.g., speech, head gestures, gaze, and facial expression), different forms of feedback (e.g., backchannels, clarification requests), and models for allowing the agent to assess the user's level of understanding and adapt its behavior accordingly. Finally, we analyse some shortcomings of current approaches to modeling feedback and identify important directions for future research.

Axelsson, A., Buschmeier, H., & Skantze, G. (2022). Modeling Feedback in Interaction With Conversational Agents - A Review. Front. Comput. Sci. 4:744574. doi: 10.3389/fcomp.2022.744574

2021

Rohlfing, K. J., Cimiano, P., et al. (2021). Explanation as a social practice: Toward a conceptual framework for the social design of AI systems

The recent surge of interest in explainability in artificial intelligence (XAI) is propelled not only by technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of explainee’s understanding. In order to overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view the co-construction on a microlevel as embedded into a macrolevel, yielding expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the current level of the explainee’s understanding; the explainee, in turn, is expected to provide cues that direct the explainer. Building on explanations being a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.

Rohlfing, K. J., Cimiano, P., Scharlau, I., Matzner, T., Buhl, H. M., Buschmeier, H., Esposito, E., Grimminger, A., Hammer, B., Häb-Umbach, R., Horwath, I., Hüllermeier, E., Kern, F., Kopp, S., Thommes, K., Ngonga Ngomo, A., Schulte, C., Wachsmuth, H., Wagner, P., & Wrede, B. (2021). Explanation as a social practice: Toward a conceptual framework for the social design of AI systems. IEEE Transactions on Cognitive and Developmental Systems 13(3), 717-728. doi: 10.1109/TCDS.2020.3044366