Great success in Lisbon: At the World Conference on eXplainable Artificial Intelligence, the team of the Transregio subproject C03 "Interpretable machine learning: explaining change" won the award for the best scientific publication. Maximilian Muschalik and Fabian Fumagalli travelled to Portugal for the conference and received the award for their paper "iPDP: On Partial Dependence Plots in Dynamic Modelling Scenarios".
"In our paper, we take an existing method and extend it to cases where models change continuously," explains Maximilian Muschalik. Post-hoc explanation techniques, such as the well-known Partial Dependence Plot (PDP), investigate the effects of features on the decision. They are used in explainable artificial intelligence (XAI) to understand black-box machine learning models. While many real-world applications require dynamic models that constantly adapt over time and respond to changes in the underlying distribution, XAI has mainly considered static learning environments where models are trained once on a dataset and remain unchanged. In the winning paper, Muschalik and Fumagalli develop a novel model-independent XAI framework called incremental PDP (iPDP) that, building on the original PDP, extracts time-dependent feature effects in non-stationary learning environments. "Our method allows the calculations to be efficiently updated with the current model and new data points to respond to changes as quickly as possible," explains Fumagalli. Muschalik explains, "In our paper, we illustrate the effectiveness of iPDP with an example drift detection application and conduct several experiments with real and synthetic datasets and streams."
"Overall, we had a very positive impression of the conference. The presentations were very varied and there were many interfaces with other disciplines, which is yet another confirmation of the TRR's interdisciplinary work," Fumagalli sums up.
The World Conference on eXplainable Artificial Intelligence (xAI 2023) took place for the first time in 2023. In the future, it will bring together researchers, academics and practitioners every year to promote the exchange and discussion of knowledge, new perspectives, experiences and innovations in the field of explainable artificial intelligence (XAI). The event is multidisciplinary and interdisciplinary in nature and invites scholars from various disciplines, including computer science, psychology, philosophy and the social sciences, as well as representatives from industry interested in the practical, social and ethical aspects of explaining models produced by artificial intelligence (AI).