Trusted AI at AI Action Summit in Paris

On 6-11 February, CERTAIN Advisory Board members Prof. Philipp Slusallek and Prof. Kristian Kersting, joined by Prof. Antonio Krüger and Prof. Andreas Dengel from DFKI, attended the AI Action Summit in Paris.

Excerpts from the DFKI press release:

At the AI Action Summit in Paris on 6-11 February, EU Commission President Ursula von der Leyen sent a strong signal by announcing the mobilisation of 200 billion euros for trustworthy AI: Europe recognises the strategic relevance of trustworthy AI and wants to position itself as a sovereign, global shaper of AI.

While the USA is pursuing largely unregulated innovation and China is establishing a centralised AI model, the EU is focusing on a third path: excellence through collaboration, openness, and strong guarantees of trust. This approach is underpinned not only by political decisions, but also by scientific and industrial players whose initiatives and investments are laying the foundations for a resilient European AI infrastructure.

One key European alternative lies in neuro-explicit AI systems, which combine generative, learned processes (neuro) with a formally verifiable approach (explicit). The explicit side rests on explicit knowledge representations and logical inference, enabling decision-making processes that explain themselves and can be verified. These characteristics are particularly important in the context of European AI regulation, as the AI Act explicitly calls for transparency and traceability as cornerstones of trustworthy AI. This symbiosis offers Europe the opportunity to develop robust, trustworthy AI solutions that meet the high requirements of the AI Act while remaining competitive with US and Chinese developments.
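To make the combination more concrete, the following is a minimal, purely illustrative sketch (in Python) of how a neuro-explicit pipeline could be wired: an opaque learned component proposes a decision, and explicit, human-readable rules either certify it or veto it while naming the rule that failed. The model, rules, and domain here are hypothetical assumptions for illustration, not DFKI's actual architecture.

```python
# Illustrative sketch of a neuro-explicit decision pipeline.
# All names, rules, and thresholds below are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Prediction:
    label: str
    confidence: float

def neural_component(features: dict) -> Prediction:
    """Stand-in for a learned model ("neuro"): statistical and opaque."""
    # A toy score in place of a trained network.
    score = 0.9 if features.get("sensor_ok") else 0.4
    return Prediction(label="proceed", confidence=score)

# Explicit, inspectable rules ("explicit"): each can be read, audited,
# and cited when a decision is rejected.
Rule = Callable[[dict, Prediction], bool]

RULES: dict[str, Rule] = {
    "min_confidence": lambda f, p: p.confidence >= 0.8,
    "sensor_must_be_ok": lambda f, p: not (p.label == "proceed"
                                           and not f.get("sensor_ok")),
}

def decide(features: dict) -> tuple[str, list[str]]:
    """Accept the neural output only if every explicit rule holds."""
    pred = neural_component(features)
    violations = [name for name, rule in RULES.items()
                  if not rule(features, pred)]
    if violations:
        return "reject", violations  # traceable: the failing rules are named
    return pred.label, []

if __name__ == "__main__":
    print(decide({"sensor_ok": True}))   # ('proceed', [])
    print(decide({"sensor_ok": False}))  # ('reject', ['min_confidence', 'sensor_must_be_ok'])
```

The point of the explicit layer is that every rejection is traceable to a named, human-readable rule, which is precisely the kind of transparency and traceability the AI Act demands.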

Driving this vision forward requires not only technological innovation, but also the establishment of a uniform standard for trustworthy AI. The European approach goes beyond mere technical specifications and focuses on the integration of ethical and legal principles in the development of AI systems. This includes methodological concepts that ensure transparency, traceability and fairness.

A key component is the development of robust mechanisms that allow the trustworthiness of AI systems to be demonstrated. Standardised test procedures and certification processes play a central role here. Safety-critical applications in particular, such as AI in autonomous systems or in medical diagnostics, require evidence-based guarantees of their functionality. Only a transparent and objective assessment framework can create lasting societal trust. The objective is therefore not only to define the principles of trustworthy AI, but also to establish them as a binding standard at the European and global level.
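As a rough illustration of what a standardised test procedure might look like in practice, the sketch below checks a model against a fixed evaluation set with fixed thresholds and emits a machine-readable pass/fail report. The metrics, thresholds, and toy model are invented for illustration; concrete certification schemes under the AI Act are still being worked out.

```python
# Hypothetical sketch of a standardised test procedure: fixed evaluation
# data, fixed thresholds, machine-readable report. All values are invented.

def accuracy(model, dataset):
    """Fraction of examples the model labels correctly."""
    correct = sum(1 for x, y in dataset if model(x) == y)
    return correct / len(dataset)

def certify(model, dataset, groups, min_accuracy=0.95, max_group_gap=0.05):
    """Report whether the model meets overall and per-group thresholds."""
    overall = accuracy(model, dataset)
    per_group = {g: accuracy(model, items) for g, items in groups.items()}
    gap = max(per_group.values()) - min(per_group.values())
    return {
        "overall_accuracy": round(overall, 3),
        "per_group_accuracy": {g: round(a, 3) for g, a in per_group.items()},
        "group_gap": round(gap, 3),
        "passed": overall >= min_accuracy and gap <= max_group_gap,
    }

if __name__ == "__main__":
    # Toy model and data standing in for a real system under test.
    model = lambda x: x % 2                     # "predicts" parity
    data = [(i, i % 2) for i in range(100)]
    groups = {"A": data[:50], "B": data[50:]}
    print(certify(model, data, groups))
```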

[Original press release: https://www.dfki.de/web/news/ai-action-summit-europas-weg-zur-vertrauenswuerdigen-ki]