On the Path to Trustworthy AI in Medicine: Holistic Quality Assurance from Development to Auditing

As a project partner in Pillar 2 of MISSION KI, DFKI has conducted research on important fundamentals for the long-term safe and trustworthy use of artificial intelligence (AI) in high-risk areas such as medicine.

Trust is essential for the safe use of AI in high-risk areas such as medicine. Compliance with regulatory requirements such as the EU AI Act forms its foundation. However, the long-term, safe use of trustworthy AI systems requires technically excellent solutions across all phases of the AI life cycle – from development to testing.

The ever-increasing complexity of modern AI systems and the multitude of potential deployment scenarios make implementing and testing these requirements in a scalable, transparent way considerably more difficult. Technically sound demonstration of the various dimensions of trustworthiness is complex: assessments are context-dependent, thresholds must be defined meaningfully, and unpredictable real-world uses of AI lead to high testing costs.

Complementing the MISSION KI quality standard, the German Research Center for Artificial Intelligence (DFKI) has made various contributions within Pillar 2 of MISSION KI to pave the way for trustworthy medical AI and to simplify the auditing of these dimensions. The project has developed conceptual foundations for two central platforms for creating and verifying the trustworthiness of AI systems – the Quality Platform and the Test Platform.

Both platforms complement the quality standard by supporting high-risk applications and partially automating risk assessment and AI debugging. Development and testing are based on real-world use cases from medical fields such as dermatology, oncology, and psychotherapy, addressing the practical challenges of building trustworthy AI systems in highly regulated environments.
