
Explainable AI (XAI): From Theory to Transparent AI Systems - Workshop at the IQZ Kaiserslautern


The workshop 'Explainable AI (XAI): From Theory to Transparent AI Systems' at IQZ Kaiserslautern is the second workshop in a series that examines transparency as a foundation for the trustworthiness of AI systems from different perspectives. Explainable AI (XAI) plays a crucial role in making the decision-making processes of modern AI systems, often referred to as 'black boxes,' more understandable and thus strengthening trust in their application.

In addition to good documentation of the AI system, a comprehensible explanation of its decision paths is an important basis for compliance with regulatory requirements and for building trust in AI applications. Beyond that, explainable AI offers further advantages for developers and users, both during system development and afterwards. The workshop gives participants practical insights into relevant XAI methods, their potential in various areas, and current challenges and trends, and shows what successful application of explainable AI can look like. It is aimed at executives, entrepreneurs, AI developers from industry, as well as users of AI systems.
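To give a flavour of what a hands-on look at XAI methods can involve, the minimal sketch below applies permutation feature importance, one widely used model-agnostic explanation technique. The choice of scikit-learn, a random forest, and the Iris toy dataset is purely illustrative and not part of the workshop material.

```python
# Illustrative sketch: permutation feature importance as a simple,
# model-agnostic explanation of a "black box" classifier.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_iris()
X, y = data.data, data.target

# Train an opaque model whose decisions we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy:
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, mean, std in zip(data.feature_names,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```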

Agenda: 

  1. Breaking the Black Box: How XAI Builds Trust beyond Transparency 

  2. From Theory to Practice: Navigating the Landscape of XAI Techniques

  3. When Explanations Lie: Metrics for Robust & Actionable XAI 

Event Details

Date

27 March 2025

Time

1:45 pm - 5:30 pm

Place

Innovations- & Qualitätszentrum, Trippstadter Str. 122, 67663 Kaiserslautern

Language

English

Target Audience

Executives, entrepreneurs, AI developers from industry, as well as anyone working with or implementing AI systems

Speaker

Ludger van Elst (DFKI Kaiserslautern) 
Adriano Lucieri (DFKI Kaiserslautern) 
David Dembinsky (DFKI Kaiserslautern) 

Format

Interactive - bring your laptop

