International Conference on “Explaining Machines” at Bielefeld University

To the event page

Conference abstract:

Today's most advanced machine learning algorithms are often incomprehensible to humans, including those who designed them. How can we achieve an understandable explanation of their processes? This is currently one of the crucial questions for artificial intelligence projects. Should we be able to explain how machines work, or should the machines learn to explain themselves? The ambiguity in the title of our conference captures the coexistence of two possibilities: machines explaining themselves or humans explaining machines – or perhaps both at the same time.

If explaining machines is to have a social and political impact, it is not enough that they are understandable to computer science experts. Explaining machines needs to involve socially situated and diverse people. The issues are complex and involve multiple skills. Computer scientists who design machines must collaborate with social scientists who study understanding (and the lack thereof), the process of explanation, and its conditions. Now more than ever, the challenge of artificial intelligence projects is as much social as technological, and our conference addresses this by stimulating the debate and presenting the variety of perspectives and insights developed by the social sciences in the context of XAI.