
Who we are
CERTAIN stands for “Center for European Research in Trusted AI”. Its approach puts “trust” in AI systems front and center, an aspect that is often neglected in international research. The aim is to develop new technologies that provide functional and other guarantees for AI systems.
CERTAIN is a consortium: a collaborative initiative involving various partners, legally part of DFKI, that researches, develops, deploys, standardizes, and promotes Trusted AI techniques with the aim of providing guarantees for and certification of AI systems. It also enables project collaboration with both internal and external partners.

What we do
Current News
CERTAIN participates in German Unity Day Celebrations in Saarbrücken.
From October 2 to 4, around 400,000 visitors explored the German Unity Day celebrations in Saarbrücken under the motto “Future through Change”. The CERTAIN team on site, consisting of Simon Ostermann, Patrick Schramowski, and Leon Schall, presented LavaGuard, an interactive exhibit demonstrating how transparency and security can be embedded into visual AI systems. The exhibit […]
Interview with CERTAIN board member Kevin Baum on Causal AI.
In the latest issue of IM+io, CERTAIN board member Dr. Kevin Baum explores a crucial question for AI development in an interview: what do we really see when we focus solely on correlations, and what remains hidden if we ignore causality? AI systems are great at spotting patterns, but that does not mean they understand relationships. […]
André Meyer-Vitali speaks at Trustworthy AI Summit 2025 in Paris.
CERTAIN researcher André Meyer-Vitali will speak in the session “State of the art and industrial needs for Trustworthy AI” at the Trustworthy AI Summit 2025 in Paris. His perspective bridges research and real-world application. The summit will bring together global leaders, industry, and academia to shape the future of responsible and certifiable AI.
CERTAIN board member Simon Ostermann co-authored a paper on Advancing Cross-Lingual NLP with Smarter Transfer Methods.
Cross-lingual knowledge transfer, particularly from high- to low-resource languages, remains one of NLP’s most persistent challenges. At NAACL 2025, CERTAIN researcher Simon Ostermann co-authored a paper exploring how parameter-efficient fine-tuning methods can improve multilingual AI systems without massive computational overhead. The work directly supports CERTAIN’s mission to make AI inclusive, efficient, and trustworthy across all languages. The […]