Meet the team behind our mission.
Learn about our values, our partners, and the people turning ideas into action.
Our Mission
The aim of the CERTAIN consortium is to work across the value chain from basic research to society, focusing on the development, optimization and implementation of Trusted AI techniques to provide guarantees and certifications for AI systems in specific use-cases.
The goals of CERTAIN are fourfold:
First, CERTAIN strives to conduct excellent research on diverse aspects of the trustworthiness of AI.
Second, the consortium collaborates with industry, standardization bodies, and political and societal stakeholders to set certification requirements, define AI trust labels, foster AI literacy, and apply trustworthiness aspects to real-world problems and applications.
Third, CERTAIN aims to defragment the European research and application landscape for trustworthy AI, bringing together stakeholders of all kinds that are interested in trustworthy AI technology. Fourth and last, CERTAIN also aims to provide communication channels both among its stakeholders and with the public, e.g. by promoting the conferences, workshops, and collaboration opportunities that exist around research on and the use of trustworthy AI technologies.
After establishing itself as a local and regional focal point in Phase 1 and expanding to become a widely visible European lighthouse center in Phase 2, CERTAIN plans for further growth in the future through the addition of new partners and thematic pillars, as well as the federation of multiple lighthouse centers.
Become part of the CERTAIN initiative by joining the consortium; reach out to us to get involved.
Executive Board

Dr. Kevin Baum
“How can methods from ethics and practical reasoning enable the responsible design, development, and deployment of AI systems and AI agents, ensuring alignment with human values and effective human oversight – even under conditions of moral and epistemic uncertainty?”

Dipl.-Psych. Sabine Feld

Dr. Inform. André Meyer-Vitali
“To fully exploit the opportunities offered by artificial intelligence, corresponding systems must be seamlessly integrated into the socio-technical environment and designed to be trustworthy.”

Dr.-Ing. Christian Müller
“How can we design and validate AI systems that operate reliably, transparently, and ethically in safety-critical domains such as autonomous driving – ensuring they perform robustly under real-world uncertainty and remain accountable in dynamic human-centered environments?”

Dr. Simon Ostermann
“How can interpretability techniques improve the efficiency, robustness and trustworthiness of Large Language Models (LLMs), while making them more accessible and adaptable to diverse domains?”

Dr. rer. nat. Patrick Schramowski
“How can technical safeguards, precise safety specifications, and rigorous verification methods provide high-assurance quantitative guarantees, ensuring that autonomous, general-purpose AI systems reliably avoid harmful behaviors in safety-critical contexts?”
Advisory Board
Prof. Dr. Kristian Kersting

Prof. Dr.-Ing. Philipp Slusallek
“Our research focuses on enabling Trusted AI by developing methods that provide formal and empirical guarantees for AI systems — not only in terms of functionality but also fairness, transparency, and robustness. We explore complementary approaches including guarantees by design, systematic testing, transparency, and human-AI interaction, to move beyond black-box models toward accountable and reliable AI.”
Prof. Dr. Verena Wolf
“My research focuses on hybrid modeling approaches that integrate mechanistic knowledge into neural architectures to improve robustness, interpretability, and generalization — especially in data-scarce domains like the life sciences. These neuro-mechanistic models enhance trustworthiness by combining domain knowledge with data-driven learning and support the development of AI systems that are more transparent, verifiable, and certifiable — key goals of the CERTAIN initiative.”
Principal Investigators

Dr. Kevin Baum
“How can methods from ethics and practical reasoning enable the responsible design, development, and deployment of AI systems and AI agents, ensuring alignment with human values and effective human oversight – even under conditions of moral and epistemic uncertainty?”

Dr. Inform. André Meyer-Vitali
“To fully exploit the opportunities offered by artificial intelligence, corresponding systems must be seamlessly integrated into the socio-technical environment and designed to be trustworthy.”

Dr.-Ing. Christian Müller
“How can we design and validate AI systems that operate reliably, transparently, and ethically in safety-critical domains such as autonomous driving – ensuring they perform robustly under real-world uncertainty and remain accountable in dynamic human-centered environments?”

Dr. Simon Ostermann
“How can interpretability techniques improve the efficiency, robustness and trustworthiness of Large Language Models (LLMs), while making them more accessible and adaptable to diverse domains?”
Dr. Vera Schmitt
“Within the XplaiNLP group, we develop interactive, adaptive, and actionable multi-level explainable AI that combines mechanistic interpretability, causal reasoning, and expert knowledge to enable trustworthy decision-support with LLMs and MLLMs in high-risk domains like medical decision-making and disinformation detection.”

Dr. rer. nat. Patrick Schramowski
“How can technical safeguards, precise safety specifications, and rigorous verification methods provide high-assurance quantitative guarantees, ensuring that autonomous, general-purpose AI systems reliably avoid harmful behaviors in safety-critical contexts?”
Dr. Jonas Wahl
“How can we use causal knowledge about the world to make AI models more robust under changing circumstances and make their reasoning more understandable to humans?”
Researchers

Tanja Bäumel M.Sc.
“Unraveling the inner workings of pretrained language models: Investigating their reasoning processes, knowledge acquisition, and the emergence of unexpected capabilities.”

Timo Gros M.Sc.
“How can we advance neuro-explicit methods and leverage them to improve the robustness of AI systems, particularly for safety-critical applications in real-world scenarios?”

Manuela Schuler M.Sc.
“How can interactive AI applications help humans to understand the model’s predictions? How can we obtain guarantees for reasonable and correct model behavior?”

Andrea Sipka M.Sc.
“Researching and developing applied AI solutions that make a habit of two things – help; or at least do no harm.”
Administration
Dr. Joshua Berger

Janina Hoppstädter B.Sc.
Marlies Thönnissen M.A.
Student Assistants
Emily Pöppelmann
