Meet the team behind our mission.
Learn about our values, our partners, and the people turning ideas into action.
Our Mission
The aim of the CERTAIN consortium is to work across the value chain, from basic research to society, focusing on the development, optimization, and implementation of Trusted AI techniques that provide guarantees and certifications for AI systems in specific use cases.
The goals of CERTAIN are fourfold:
First, CERTAIN strives to conduct excellent research on diverse aspects of the trustworthiness of AI.
Second, the consortium collaborates with industry, standardization bodies, and political and societal stakeholders to set certification requirements, define AI trust labels, foster AI literacy, and apply trustworthiness aspects to real-world problems and applications.
Third, CERTAIN aims to defragment the European research and application landscape for trustworthy AI, bringing together stakeholders of all kinds who are interested in trustworthy AI technology.
Fourth and last, CERTAIN aims to provide communication channels both between its stakeholders and with the public, e.g. by promoting conferences, workshops, and collaboration opportunities around the research on and the use of trustworthy AI technologies.
After establishing itself as a local and regional focal point in Phase 1 and expanding into a widely visible European lighthouse centre in Phase 2, CERTAIN plans further growth through the addition of new partners and thematic pillars, as well as the federation of multiple lighthouse centres.
Become part of the CERTAIN initiative by joining the consortium; reach out to us to get involved.
Executive Board

“How can methods from ethics and practical reasoning enable the responsible design, development, and deployment of AI systems and AI agents, ensuring alignment with human values and effective human oversight – even under conditions of moral and epistemic uncertainty?”
Dr. Kevin Baum

Dipl.-Psych. Sabine Feld

“To fully exploit the opportunities offered by artificial intelligence, corresponding systems must be seamlessly integrated into the socio-technical environment and designed to be trustworthy.”
Dr. Inform. André Meyer-Vitali

“How can we design and validate AI systems that operate reliably, transparently, and ethically in safety-critical domains such as autonomous driving – ensuring they perform robustly under real-world uncertainty and remain accountable in dynamic human-centered environments?”
Dr.-Ing. Christian Müller

“How can interpretability techniques improve the efficiency, robustness and trustworthiness of Large Language Models (LLMs), while making them more accessible and adaptable to diverse domains?”
Dr. Simon Ostermann

“How can technical safeguards, precise safety specifications, and rigorous verification methods provide high-assurance quantitative guarantees, ensuring that autonomous, general-purpose AI systems reliably avoid harmful behaviors in safety-critical contexts?”
Dr. rer. nat. Patrick Schramowski
Advisory Board
Prof. Dr. Kristian Kersting

“Our research focuses on enabling Trusted AI by developing methods that provide formal and empirical guarantees for AI systems — not only in terms of functionality but also fairness, transparency, and robustness. We explore complementary approaches including guarantees by design, systematic testing, transparency, and human-AI interaction, to move beyond black-box models toward accountable and reliable AI.”
Prof. Dr.-Ing. Philipp Slusallek
Prof. Dr. Verena Wolf
Principal Investigators

“How can methods from ethics and practical reasoning enable the responsible design, development, and deployment of AI systems and AI agents, ensuring alignment with human values and effective human oversight – even under conditions of moral and epistemic uncertainty?”
Dr. Kevin Baum

“To fully exploit the opportunities offered by artificial intelligence, corresponding systems must be seamlessly integrated into the socio-technical environment and designed to be trustworthy.”
Dr. Inform. André Meyer-Vitali

“How can we design and validate AI systems that operate reliably, transparently, and ethically in safety-critical domains such as autonomous driving – ensuring they perform robustly under real-world uncertainty and remain accountable in dynamic human-centered environments?”
Dr.-Ing. Christian Müller

“How can interpretability techniques improve the efficiency, robustness and trustworthiness of Large Language Models (LLMs), while making them more accessible and adaptable to diverse domains?”
Dr. Simon Ostermann
Dr. Vera Schmitt

“How can technical safeguards, precise safety specifications, and rigorous verification methods provide high-assurance quantitative guarantees, ensuring that autonomous, general-purpose AI systems reliably avoid harmful behaviors in safety-critical contexts?”
Dr. rer. nat. Patrick Schramowski
Dr. Jonas Wahl
Researchers

“Unraveling the inner workings of pretrained language models: Investigating their reasoning processes, knowledge acquisition, and the emergence of unexpected capabilities.”
Tanja Bäumel M.Sc.

“How can we advance neuro-explicit methods and leverage them to improve the robustness of AI systems, particularly for safety-critical applications in real-world scenarios?”
Timo Gros M.Sc.

“How can interactive AI applications help humans to understand the model’s predictions? How can we obtain guarantees for reasonable and correct model behavior?”
Manuela Schuler M.Sc.

“Researching and developing applied AI solutions that make a habit of two things – help; or at least do no harm.”
Andrea Sipka M.Sc.
Administration
Dr. Joshua Berger

Janina Hoppstädter B.Sc.
Marlies Thönnissen M.A.
Student Assistants
Emily Pöppelmann
