
How Research Becomes Practical AI Quality Criteria

The development of the MISSION KI Quality Standard has been strongly shaped by research – particularly by Fraunhofer IAIS, which has been working for many years on the assurance, evaluation, and quality assessment of AI systems.

In this interview, Dr. Maximilian Poretschkin explains how scientific insights were translated into practical assessment criteria, why trustworthy AI can indeed be measured, and what role the standard may play in the emerging European AI landscape.

Dr. Poretschkin, you and your team at Fraunhofer IAIS played a key role in shaping the MISSION KI Quality Standard. What was your contribution, and what do you see as the most important input from research?

We have been working on the assurance and evaluation of AI systems for many years. Four years ago, we published one of the first Assessment Catalogues for AI systems, which served as an important foundation for MISSION KI. In the project, we contributed the latest scientific findings to the development of the Quality Standard and translated them into practical, applicable assessment criteria.

The standard translates principles such as fairness, transparency, and robustness into measurable criteria. How do you approach this translation from a scientific perspective?

These are central quality dimensions of AI that determine whether a system will operate reliably and successfully in practice. However, depending on the system and its context of use, these concepts lead to different, very specific requirements. With the VCIO framework, these quality dimensions can be broken down into concrete observables that can be clearly assessed for a given AI system.
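The VCIO-style decomposition described above (abstract values broken down through criteria and indicators into directly checkable observables) can be pictured as a simple tree structure. The sketch below is purely illustrative: the class names, the example "Fairness" breakdown, and the observables are hypothetical and are not taken from the MISSION KI standard or the VCIO framework itself.

```python
from dataclasses import dataclass

# Hypothetical sketch of a VCIO-style hierarchy:
# Value -> Criterion -> Indicator -> Observable.
# Only the observables at the leaves are assessed directly on the system.

@dataclass
class Observable:
    name: str
    directly_checkable: bool  # can this be verified on the concrete system?

@dataclass
class Indicator:
    name: str
    observables: list

@dataclass
class Criterion:
    name: str
    indicators: list

@dataclass
class Value:
    name: str
    criteria: list

# Illustrative example breakdown (names invented for this sketch):
fairness = Value(
    name="Fairness",
    criteria=[
        Criterion(
            name="No unjustified discrimination",
            indicators=[
                Indicator(
                    name="Outcome parity across groups",
                    observables=[
                        Observable("Parity metric within documented threshold", True),
                        Observable("Rationale for protected attributes documented", True),
                    ],
                )
            ],
        )
    ],
)

def count_observables(value: Value) -> int:
    """Number of concrete check items an assessor would work through."""
    return sum(
        len(indicator.observables)
        for criterion in value.criteria
        for indicator in criterion.indicators
    )
```

The point of such a decomposition is that an assessor never scores "fairness" directly; they score the observables at the leaves, and the aggregated result rolls back up the tree to a statement about the abstract quality dimension.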

How do you bridge the gap between scientific theory and practical application in companies or testing bodies? How does the standard connect research and practice?

As mentioned earlier, the standard is grounded in the current scientific state of the art. At the same time, it is designed to be applied with minimal effort across the entire range of AI systems. To validate this, we conducted a series of pilot assessments to ensure practical feasibility.

What does “trustworthy AI” mean to you – and how close are we to actually measuring it?

You can unpack this by looking at the term “trustworthy”: a system must be worthy of trust. That means it is designed to operate safely and without unjustified discrimination, depending on the context. For trust to emerge, these qualities must also be communicated appropriately to the relevant stakeholders. With the VCIO approach developed in our project, these aspects of trustworthiness become measurable.

How far along is Europe in harmonizing AI standards, and what is still needed to establish consistent assessment processes?

The harmonized standards that will operationalize the European AI Act are currently being developed at high speed. Producing so many essential standards in such a short time is a real challenge. Even once this phase is complete, much will remain to be done—for example, sector-specific standards or conformity assessment schemes.

If you look three years ahead, what will the landscape of AI assessment and certification look like?

Within the next three years, the European AI Act will be fully in force. In the high-risk area, it will require conformity assessments, which in some cases will also mean formal certification. In parallel, sector-specific standards and conformity assessment schemes will emerge. The MISSION KI Quality Standard provides an important foundation for all these developments.

What impressed or influenced you most during the work on the standard?

The collaboration within the project. The high level of interdisciplinary work, the diversity of perspectives and expertise, and the genuine enthusiasm for the topic were truly impressive.

Where do you currently see the biggest open questions in the assessment and certification of AI systems?

The MISSION KI Quality Standard has laid an important foundation. But there is still much to do—particularly for highly critical AI systems that continue learning in real time. And there is also a great deal of work ahead in the domain of agentic AI.
