December 8 is Education Day in Germany, an annual day of action that promotes equal educational opportunities. In this context, the question arises as to how modern technologies, especially artificial intelligence (AI), can facilitate access to education and knowledge. TRR 318 spoke with Elena Esposito, professor of sociology at Bielefeld University, about this topic.
How can artificial intelligence help make education and knowledge more accessible worldwide?
Elena Esposito: Artificial intelligence can contribute to greater educational equality worldwide by enabling personalized learning, breaking down language barriers, and providing high-quality learning materials at a low cost. AI-supported translations, adaptive learning platforms, and tutoring systems can create new opportunities, particularly for learners in regions with limited resources.
Where do you see the greatest potential for AI to promote education? What risks should be considered?
Elena Esposito: The greatest opportunities lie in individualized learning paths, targeted support for learners, and barrier-free educational opportunities. AI can relieve teachers of routine tasks, giving them more time for educational work. However, there are also risks. For example, AI can exacerbate existing inequalities through unequal access to devices, the internet, and digital competence. Additional concerns include biases in algorithms, risks to data protection, the potential for surveillance, the commercialization of education, and the danger of education becoming dependent on a few large tech providers.
How do you see the future development of AI in education?
Elena Esposito: In the future, AI is expected to play a larger role in learning platforms, exam formats, and teaching methods. Teaching and learning processes will become more data-driven, personalized, and hybrid. At the same time, there will be a growing need for digital and critical thinking skills so that AI can be used meaningfully and reflectively. Important questions remain: How can we ensure fair and inclusive systems? How do we protect data? How do we maintain educational responsibility?
In this context, how important is it for AI systems to be able to explain their decisions in an understandable way?
Elena Esposito: Explainability is essential for ensuring that learners, teachers, and institutions understand how recommendations and assessments are made. Explainable AI builds trust and reduces the risk of unfair decisions or hidden bias. However, recent research shows that explainability does not necessarily mean transparency because the internal processes of AI systems are often difficult to understand.
Elena Esposito is a sociology professor at Bielefeld University and a project leader in TRR 318. In project B01, her research focuses on the role of explainability in the interaction between humans and AI systems in various areas of society.