Germany's path to trustworthy AI: 1st version of the MISSION KI voluntary AI minimum standard
Berlin, 23 October 2024 - Norms and standards are crucial for digital progress and for building trust in the use of new technologies. The EU AI Act therefore defines strict requirements for high-risk systems. However, many AI developments pose a low risk. MISSION KI has developed a voluntary minimum standard to give the companies behind these systems guidance on implementing trustworthy AI as well. An initial version was presented at the German government's Digital Summit in Frankfurt on 21 October 2024.
The MISSION KI minimum standard focusses on AI applications below the high-risk threshold and is compatible with the requirements of the EU AI Act. The minimum standard thus aims to ensure quality and trustworthiness without inhibiting innovation. In order to make the minimum standard testable, it translates abstract quality values into concrete test procedures. On the one hand, the aim is to strengthen user confidence in AI technologies. On the other hand, it is intended to create competitive advantages for AI providers who apply the standard and reliability for AI operators who use the AI components. To this end, MISSION KI is working with partners PwC Germany, TÜV AI.Lab, VDE Association for Electrical, Electronic & Information Technologies, CertifAI, AI Quality & Testing Hub and the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS.
The new minimum standard offers clear advantages: AI providers benefit from an efficient proof of quality that large companies and start-ups alike can use. Because companies that apply the standard stand out positively on the market thanks to comparable quality criteria, they can improve their competitiveness. AI operators, in turn, benefit from greater market transparency and from the reliability of the AI applications they use.
Manfred Rauhmeier, Chairman of the acatech Foundation and Secretary of the acatech Coordination Committee: ‘We very much welcome the fact that MISSION KI gives providers a standard that allows them to align their AI applications with trustworthiness requirements at an early stage of the development process, “trustworthy by design” so to speak.
This gives them proof of quality and enables them to position themselves on the market. This is an important step that will strengthen the competitiveness of our companies.’
Hendrik Reese, PwC: ‘While the EU AI Act largely creates (legal) certainty for AI for the first time, companies are often faced with the question of what the requirements actually mean. Providing an orientation framework here at an early stage will create momentum for the AI market in Europe. Adoption is now the key to realising the economic potential. Testing and proof of trust in the technology value chain also make an important contribution.’
The MISSION KI minimum standard is based on the ‘Ethics Guidelines for Trustworthy AI’ defined by the High-Level Expert Group on AI (‘HLEG AI’) convened by the European Commission. The expert group defined key principles against which the trustworthiness of AI systems should be assessed. Addressing these principles, which include reliability, AI-specific cybersecurity, data quality, protection and management, non-discrimination, transparency, and human oversight and control, ensures that the MISSION KI minimum standard is compatible with European AI regulation and standardisation. As a next step, the team will test the standard and the associated test criteria and procedures using AI applications that are already in use. This testing will ensure that the standard is developed in a needs-orientated and practical manner. The development and testing of the standard should be completed by the end of 2025.