How can different social groups be involved in discussions about artificial intelligence technologies? “First of all, by learning to talk about it and ask questions,” says Dr. Nils Klowait from the Ö project of TRR 318. The Co-Construction Workshops (CCWS) are one platform for this. They were developed in the first funding phase (2021-2025) of TRR 318 and tested with a variety of groups.
Developing a Common Language for AI
The CCWS aim to enable people from all walks of life to engage critically, personally, and productively with artificial intelligence. Rather than imparting expert knowledge in a top-down manner, the workshops create a space where participants can share their perspectives, experiences, and questions. “The workshop participants are experts in their own lives and work. We simply open up a space where this expertise can be related to AI,” Klowait explains.
Accordingly, the workshops were designed not as technical training but as a platform for co-constructive knowledge transfer. The goal is to develop a shared language for discussing AI that goes beyond marketing terms and dystopian narratives.
Different Formats for Different Groups
The CCWS are embedded in TRR 318's public outreach project, which was led in the first funding phase by Prof. Dr. Ilona Horwath, Prof. Dr. Carsten Schulte, and Prof. Dr.-Ing. Britta Wrede and aimed to engage with different publics. The research associates, technology sociologist Dr. Nils Klowait and computer science educator Michael Lenke, therefore developed workshop formats for highly diverse target groups: school classes and teachers, university students, senior citizens, as well as local and international organizations and companies. The workshops were structured in a correspondingly modular way.
Michael Lenke's workshops for school classes were designed to convey basic AI concepts in an understandable and accessible way. Nils Klowait's “stakeholder workshops” were aimed at groups that actively use or are affected by AI technologies and had a socio-critical focus. “The formats start with a low threshold and build on trust and mutual expertise,” says Klowait. “Because the participants often have years of professional experience of their own, they can provide valuable and specific insights into the risks, opportunities, and impacts of AI.”
For example, at the beginning of a workshop with the fire department, the research team made it clear that they did not have expertise in firefighting. The participants contributed their professional experience and, together with the workshop leader, developed perspectives on how AI could support and influence their work.
Learning Experiences with New Questions
The workshops began with an examination of commonly used terms. What exactly is AI? What is automated decision support? Then, participants had the opportunity to test AI systems for themselves. For example, they tested a version of ChatGPT that always lies and a version that deliberately responds rudely. This allowed them to experience firsthand how AI behavior depends on roles, rules, and contexts.
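The article does not describe how these deliberately lying or rude ChatGPT variants were built. A common way to approximate such role-bound behavior is to give a chat model a system prompt that fixes its role before the conversation starts, as in the minimal Python sketch below; the model name, prompts, and helper function are illustrative assumptions, not the project's implementation.

```python
# Minimal sketch: the same question, asked under two different system-prompt
# "roles", to illustrate how behavior depends on roles and rules.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

ROLES = {
    "liar": "You are a chat assistant that always gives confidently wrong answers.",
    "rude": "You are a chat assistant that answers curtly and impolitely.",
}

def ask(role: str, question: str) -> str:
    """Send the question to the model under the given behavioral role."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": ROLES[role]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# The same question yields noticeably different behavior depending on the role:
print(ask("liar", "What is the capital of Germany?"))
print(ask("rude", "What is the capital of Germany?"))
```

In a workshop setting, swapping only the system prompt while keeping the question constant makes the role-dependence of the AI's behavior directly observable.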
The team received feedback immediately after the workshops, often accompanied by new questions. “Many participants reported afterwards that they now have a more critical but realistic view of AI, a new willingness to ask questions, and the ability to continue the discussion on their own,” says Klowait.
Contribution to Research in TRR 318
Many participants shared their experiences with AI systems, and these contributions were used, in anonymized form, in several TRR 318 publications. The data showed that explanatory processes are dynamic: people adapt how they interact with AI over time. Another factor was whether an AI system can perceive physical reactions, something current systems such as ChatGPT cannot do.
In the second funding phase, media educator Professor Dan Verständig will further develop the workshops. Klowait looks back on inspiring years: “Our participants actively helped shape and influence the workshops. I hope that the experiences from the workshops will continue to lead to open and constructive discussions about AI in the future.”