“Technologies like ChatGPT have enormous potential to change the world”

Sociologist Nils Klowait asked ChatGPT to program a computer game and to summarize scientific papers. In this interview, he describes his impressions of ChatGPT and the opportunities and risks that artificial intelligence (AI) systems pose for society. Nils Klowait is a research fellow in TRR 318's project Ö, "Questions about Explainable Technologies," and in the working group "Technology and Diversity" led by junior professor Dr. Ilona Horwath at Paderborn University.

TRR 318: How can ChatGPT change the way people and computers interact?

Nils Klowait: Anyone can use ChatGPT without any special knowledge: if a person knows how to chat, they can probably use ChatGPT. The chatbot is not only capable of analyzing complex queries but also understands follow-up questions and corrections. I recently asked ChatGPT to write the code for a game in which a cat has to dodge hotdogs - and it returned functioning code. On further request, ChatGPT also built me a small game world with trees in the background. The result was a mouse-controlled game created entirely by ChatGPT. I have little programming knowledge, and it would have taken me hours to do what ChatGPT produced in a few minutes (Link to the game).
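To give a sense of what such generated code can look like, here is a minimal sketch in Python using the pygame library: a mouse-controlled cat dodging falling hotdogs in front of schematic trees. This is an illustrative reconstruction of the kind of game described, not the code ChatGPT actually produced for Klowait.

# Illustrative sketch (not Klowait's generated code): a mouse-controlled
# cat dodging falling hotdogs, built with the pygame library.
import random
import sys

import pygame

WIDTH, HEIGHT = 640, 480

pygame.init()
screen = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Cat vs. Hotdogs")
clock = pygame.time.Clock()

cat = pygame.Rect(WIDTH // 2, HEIGHT - 60, 40, 40)
hotdogs = []      # falling hotdog rectangles
spawn_timer = 0
score = 0

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # The cat follows the horizontal mouse position.
    mouse_x, _ = pygame.mouse.get_pos()
    cat.centerx = mouse_x

    # Spawn a new hotdog every 30 frames (about twice per second at 60 fps).
    spawn_timer += 1
    if spawn_timer >= 30:
        spawn_timer = 0
        hotdogs.append(pygame.Rect(random.randint(0, WIDTH - 30), -20, 30, 12))

    # Move hotdogs down; a collision with the cat ends the game.
    for hotdog in hotdogs:
        hotdog.y += 5
        if hotdog.colliderect(cat):
            running = False

    # Hotdogs that leave the screen count as dodged.
    score += sum(1 for h in hotdogs if h.top > HEIGHT)
    hotdogs = [h for h in hotdogs if h.top <= HEIGHT]

    # Draw the backdrop with two schematic trees, then the sprites.
    screen.fill((170, 220, 255))
    for tree_x in (80, WIDTH - 120):
        pygame.draw.rect(screen, (110, 70, 30), (tree_x, HEIGHT - 120, 16, 60))
        pygame.draw.circle(screen, (40, 140, 40), (tree_x + 8, HEIGHT - 130), 30)
    pygame.draw.rect(screen, (90, 90, 90), cat)           # the cat
    for hotdog in hotdogs:
        pygame.draw.rect(screen, (200, 120, 50), hotdog)  # a hotdog

    pygame.display.flip()
    clock.tick(60)

print(f"Dodged {score} hotdogs.")
pygame.quit()
sys.exit()

Running the script opens a window in which the cat follows the mouse; the round ends when a hotdog hits the cat, and the script then prints how many hotdogs were dodged.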

Even if my example is quirky, it shows that humans can communicate with artificially intelligent systems in ordinary language. ChatGPT is a relatively mature example of a system that can help us in our work because it can deal with human complexity - it can be responsive and cooperative. It is, therefore, a technology that can be easily integrated into human workflows, from education to high-level policy analysis. The future of work and learning thus needs to be reimagined with AI in mind.

TRR 318: What problems and opportunities could arise if ChatGPT is used in key areas of our society?

NK: The technology behind it can easily pose as an expert, quickly produce fake news, and manipulate millions of people through comments, emails, and subtly altered Wikipedia articles. Unless we invest in AI literacy - teaching users to understand the workings, limitations, and biases of AI systems - we will likely not be able to respond to these challenges. Even if we assume an AI-savvy population, we would still need a robust, flexible, and responsive regulatory framework to complement it. In short, we need to build the infrastructure for a democratic and ethical AI citizenry.

With this framework in place, ChatGPT-like technologies have enormous potential to change the world. We are taking a big step towards making the vast amount of data on the internet available to ordinary people, not just to companies like Microsoft or Google. More and more people could use it to carry out complex, abstract, and specialized tasks. It breaks down linguistic and professional barriers and can give a creative voice to previously disadvantaged groups. But the systems are not publicly accessible - they are owned by powerful companies that can decide at any time to restrict access to specific groups. There needs to be greater awareness of how these systems can serve opaque interests and how their design may soon shape the way we work, learn, and communicate.

TRR 318: Can ChatGPT also increase trust in AI?

NK: ChatGPT is not perfect. The recent scandals surrounding the integration of ChatGPT into the Bing search engine show this: it can make demonstrably false claims about the world and even become verbally abusive when asked certain questions. Many users have probably already realized that ChatGPT is not always reliable. For example, I asked it to summarize a scientific paper I was familiar with, and it did an admirable job of outlining the text accurately and concisely. But when I asked it to summarize a colleague's paper, ChatGPT fabricated both the summary and the supporting citations. I can only guess at the reasons for this.

In other words, ChatGPT's greatest strength is also its greatest weakness: the only way to interact with the system is to treat it as a conversation partner and ask it questions. But how can I trust the explanations of a system that has proven to be unreliable and inconsistent? Even if humanization makes the system seem more trustworthy, it is not accountable to its users. I therefore doubt that its conversational nature can be seen as a step toward transparency. However, we can already see that the industry is aware of the problem: Microsoft is now trying to be more transparent about how ChatGPT proceeds, showing the specific queries it searched for and citing supporting sources. That said, we are still a long way from a satisfactory solution. I find an AI that appears trustworthy yet produces untruths more problematic than a system that is obviously untrustworthy.

Further Information:

Photo: Nils Klowait, researcher in project Ö
Programming computer games with ChatGPT - Nils Klowait tested it.