AI is...

If you type “AI is” into a search engine, suggestions appear automatically. Preconceptions often show up among them, and our researchers address them below.

Prof. Dr. Katharina J. Rohlfing, TRR 318 Speaker and Project Leader (A01, A05, Z)
AI is... only something for computer scientists

AI systems are programmed by computer scientists, that much is true. But AI is used in many other areas, for example in medicine. For many users, AI systems are complex and opaque. The findings from our research contribute to a better understanding of explanation processes and thus to making AI more interactive and comprehensible. The first step is to investigate what constitutes comprehensible explanations between humans, but also between humans and machines. Our goal is AI that takes users' reactions and questions into account in its explanations. For this task, we have assembled a broad interdisciplinary team at TRR 318, including researchers from linguistics, psychology, media studies, economics, and computer science.

Prof. Dr.-Ing. Britta Wrede, Project Leader (A03, A05, Ö)
AI is... my doctor's new colleague

Doctors make decisions about people's lives: diagnosing a disease or setting up a treatment plan is part of everyday medical practice. When we say that we want to use artificial intelligence (AI) in medicine, we do not intend to replace doctors with AI and let it make decisions on its own. Rather, AI should assist doctors by providing additional information or pointing out possible medical conditions. Conversely, doctors should also be able to ask the AI questions, so that the AI and the medical staff work out solutions together. In the end, however, medical decisions will always have to be made by people.

Prof. Dr. Philipp Cimiano, Deputy Speaker and Project Leader (B01, C05, INF)
AI is... a lot of work

ChatGPT and other generative models suggest that AI systems just 'work.' However, in any application where an AI is to be used reliably and robustly within a given framework, the AI needs to be trained, monitored, and maintained. First, we must understand what users need and how they interact with the AI. We then need a strategy to efficiently generate training data for the AI, with humans providing the expected solutions. Suitable models, algorithms, and evaluation functions must be selected to train the AI. Before deployment, the trained model must be evaluated on so-called 'unseen' data to determine the AI's robustness, generalisability, and effectiveness. Throughout the so-called life cycle of the AI, we also need to monitor its performance constantly. The AI must be regularly retrained to avoid specific errors or to adapt to changes over time in the data, the application context, or its use. This is called 'training on the job' for AI.
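To make the evaluation step concrete, here is a minimal sketch in Python of training a model and then testing it on held-out 'unseen' data. The dataset, model, and metric (scikit-learn's breast-cancer data, a random forest, accuracy) are illustrative assumptions, not TRR 318's actual pipeline.

```python
# Minimal sketch of one pass through the AI life cycle described above:
# train on labeled data, then evaluate on held-out "unseen" data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Hold out 20% of the data; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)  # the "training" step

# Performance on unseen data estimates robustness and generalisability
# before deployment; in production, this check would be repeated as new
# data arrives, triggering retraining when performance degrades.
print("Accuracy on unseen data:", accuracy_score(y_test, model.predict(X_test)))
```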

Jun.-Prof. Dr. Ilona Horwath, Former Project Leader (B03, Ö)
AI is... ethically challenging

Currently, AI systems are being introduced top-down in many areas at a breathtaking pace: a central authority decides, and all subordinate areas follow. Yet AI systems often have considerable political and social impact. We cannot even properly assess the implications yet, and they are hardly considered in the current hype. One of the biggest ethical challenges at the moment is that these systems are proliferating rapidly while regulation and evaluation are still lacking. Our goal is therefore to develop AI systems in an interdisciplinary and co-constructive way. This means working closely with future users, adapting the systems to them, and evaluating them together. On the one hand, this builds the potential for a democratically legitimized social use of AI systems; on the other, it allows us to contribute expertise, practices, and socially relevant requirements to the technical design of the systems. Ultimately, this also helps us to better tap the technical development potential of AI systems.

Prof. Dr. Hendrik Buschmeier, Project Leader (A02)
AI is... often incomprehensible

We all interact with “artificial intelligence” on a regular basis. For example, it recommends “books I might like”, decides that my smartphone should stop charging at 80 per cent and continue later, shows me advertisements for things I talked about in the café yesterday, and answers my questions on demand. Some of the actions of AI-driven technology are understandable (I've already bought an e-book by this author), some are not (is my smartphone listening to my conversations in the café?). Understanding AI is not always important (I don't care as long as my smartphone is fully charged tomorrow morning), but it is often important for appropriate and responsible use (can I trust the answers of a large language model?). Explainable AI aims to make artificial intelligence understandable to users, and to facilitate the process of understanding by providing explanations. Ideally, both knowledge of how the AI works (comprehension) and knowledge of how the AI can be used (enabledness) are gained. These two forms of understanding are intertwined and interdependent; only together do they enable a conscientious and responsible use of AI in one's own actions and give us agency.

Prof. Dr. Dan Verständig, Professor of Educational Science at Bielefeld University, TRR Associate
AI is... there to do my homework

Although AI can undoubtedly be useful for doing homework, the significance and potential of the technology go far beyond that. AI can be used to solve complex problems, support medical diagnoses, and drive innovation. At the same time, many teachers in schools and universities are currently engaging with digital technologies and AI, often asking how AI can be used specifically to shape learning processes or to change the existing examination culture. Solving tasks, including homework, first requires recognizing the problem; this awareness of a problem and the understanding of one or more approaches to solving it are a fundamentally human challenge. Homework promotes the development of independence and personal responsibility: students must learn to use their time effectively and manage their tasks independently, which are important skills for later life. Homework is also an important tool for tracking students' learning progress and providing individualized feedback, which, among other things, fosters a relationship of mentoring and trust between teachers and students. If this process is outsourced, it affects both the personal and the pedagogical relationship. A skillful, meaningful, and innovative integration of AI requires clear rules, an awareness of the ethical dimensions of dealing with these technologies, and targeted integration into didactic concepts. And for students to learn how to use new technologies well, teachers also need suitable training opportunities.

Prof. Dr. Kirsten Thommes, Project Leader (A03, C02)
AI is... my biggest competitor

It is actually quite interesting that people regularly feel threatened by new technologies (computers, robots, AI), because with every new technology we seem to have to reassure ourselves about what makes us special. It is fascinating that we do not know this, and instead believe we have to explain and justify it again and again, for example by saying ‘but this technology can't be creative’, ‘but this technology can't feel’, or ‘but this technology has too little knowledge about the world’. This is particularly noticeable in workplaces, where people have long feared being replaced by all kinds of technologies, because technology does take over individual tasks or sometimes even entire jobs. The fact is, however, that all jobs are constantly changing and new professional fields are emerging. When computers were introduced, for example, no one anticipated that a large number of people would one day work as app developers. It is certain that AI will change the way we work. In many cases, what we do will probably also change, but more in terms of the division of labor between humans and AI, not in the sense that humans and AI compete for exactly the same tasks.

Prof. Dr. Benjamin Paaßen, Co-organizer of the TRR Conference and Associated TRR Member
AI is... universally applicable

The chat interface of a language model suggests almost unlimited possibilities: we can ask any question on any topic and get an answer. This general applicability appears to be a dramatic advance over earlier AI methods and systems, which often confronted us with error messages or clearly absurd answers as soon as we went beyond the training data. In reality, however, the superficial plausibility of the answers given by language models often leads us astray: we attribute knowledge, understanding, and even empathy to models that they do not have. To arrive at appropriate answers in a given context - be it help with homework, a scientific question, or even a medical diagnosis - knowledge and understanding of that context are required, from both humans and the AI system. Working out the relevant parts of a context together is an essential part of our vision for co-constructive explanations in TRR 318 and at the 3rd TRR 318 Conference: Contextualizing Explanations.