Researchers and guests discuss discrimination caused by artificial intelligence

There are many examples of artificial intelligence (AI) discrimination: automatic facial recognition, for instance, responds best to the faces of men with light skin. Other examples include an algorithm Amazon developed to evaluate job applications, which was found to discriminate against women, and an algorithm designed to calculate offenders' likelihood of recidivism, which was discovered to rate white people more positively and people of color more negatively.

Discrimination is one of the most pressing social consequences of the use of AI. In his talk last Thursday afternoon, Professor Dr. Tobias Matzner of the Institute of Media Studies at the University of Paderborn showed that preventing discrimination by AI first requires a proper understanding of it. In particular, he argued, discrimination is more than a simple bias in the data. According to Matzner, explanations of how AI works can help prevent discrimination on the part of AI because explanations take different social contexts into account. After the talk, participants had the opportunity to ask questions and share their thoughts.

The first public lecture organized by TRR 318 was attended by the Transregio scientists and about 40 guests from universities, schools, political parties and associations, among others.

Information on future public lectures can be found on our events page.

Prof. Dr. Tobias Matzner