Executive Board

AI Ethics & Alignment

Dr. Kevin Baum

“How can methods from ethics and practical reasoning enable the responsible design, development, and deployment of AI systems and AI agents, ensuring alignment with human values and effective human oversight – even under conditions of moral and epistemic uncertainty?”

Administration, Organization & HR

Dipl.-Psych. Sabine Feld

Trusted Agents, AI Engineering, Neuro-Symbolic AI

Dr. Inform. André Meyer-Vitali

“To fully exploit the opportunities offered by artificial intelligence, corresponding systems must be seamlessly integrated into the socio-technical environment and designed to be trustworthy.”

AutoSecurity

Dr.-Ing. Christian Müller

“How can we design and validate AI systems that operate reliably, transparently, and ethically in safety-critical domains such as autonomous driving – ensuring they perform robustly under real-world uncertainty and remain accountable in dynamic human-centered environments?”

AI Transparency & Explainability

Dr. Simon Ostermann

“How can interpretability techniques improve the efficiency, robustness and trustworthiness of Large Language Models (LLMs), while making them more accessible and adaptable to diverse domains?”

AI Safety

Dr. rer. nat. Patrick Schramowski

“How can technical safeguards, precise safety specifications, and rigorous verification methods provide high-assurance quantitative guarantees, ensuring that autonomous, general-purpose AI systems reliably avoid harmful behaviors in safety-critical contexts?”

Advisory Board

Prof. Dr. Kristian Kersting

Founding Director

Prof. Dr.-Ing. Philipp Slusallek

“Our research focuses on enabling Trusted AI by developing methods that provide formal and empirical guarantees for AI systems — not only in terms of functionality but also fairness, transparency, and robustness. We explore complementary approaches including guarantees by design, systematic testing, transparency, and human-AI interaction, to move beyond black-box models toward accountable and reliable AI.”

Prof. Dr. Verena Wolf

“My research focuses on hybrid modeling approaches that integrate mechanistic knowledge into neural architectures to improve robustness, interpretability, and generalization — especially in data-scarce domains like the life sciences. These neuro-mechanistic models enhance trustworthiness by combining domain knowledge with data-driven learning and support the development of AI systems that are more transparent, verifiable, and certifiable — key goals of the CERTAIN initiative.”

Principal Investigators

AI Ethics & Alignment

Dr. Kevin Baum

“How can methods from ethics and practical reasoning enable the responsible design, development, and deployment of AI systems and AI agents, ensuring alignment with human values and effective human oversight – even under conditions of moral and epistemic uncertainty?”

Trusted Agents, AI Engineering, Neuro-Symbolic AI

Dr. Inform. André Meyer-Vitali

“To fully exploit the opportunities offered by artificial intelligence, corresponding systems must be seamlessly integrated into the socio-technical environment and designed to be trustworthy.”

AutoSecurity

Dr.-Ing. Christian Müller

“How can we design and validate AI systems that operate reliably, transparently, and ethically in safety-critical domains such as autonomous driving – ensuring they perform robustly under real-world uncertainty and remain accountable in dynamic human-centered environments?”

AI Transparency & Explainability

Dr. Simon Ostermann

“How can interpretability techniques improve the efficiency, robustness and trustworthiness of Large Language Models (LLMs), while making them more accessible and adaptable to diverse domains?”

Explainable NLP

Dr. Vera Schmitt

“Within the XplaiNLP group, we develop interactive, adaptive, and actionable multi-level explainable AI that combines mechanistic interpretability, causal reasoning, and expert knowledge to enable trustworthy decision-support with LLMs and MLLMs in high-risk domains like medical decision-making and disinformation detection.”

AI Safety

Dr. rer. nat. Patrick Schramowski

“How can technical safeguards, precise safety specifications, and rigorous verification methods provide high-assurance quantitative guarantees, ensuring that autonomous, general-purpose AI systems reliably avoid harmful behaviors in safety-critical contexts?”

Causality

Dr. Jonas Wahl

“How can we use causal knowledge about the world to make AI models more robust under changing circumstances and make their reasoning more understandable to humans?”

Researchers

Tanja Bäumel M.Sc.

“Unraveling the inner workings of pretrained language models: Investigating their reasoning processes, knowledge acquisition, and the emergence of unexpected capabilities.”

Timo Gros M.Sc.

“How can we advance neuro-explicit methods and leverage them to improve the robustness of AI systems, particularly for safety-critical applications in real-world scenarios?”

Manuela Schuler M.Sc.

“How can interactive AI applications help humans to understand the model’s predictions? How can we obtain guarantees for reasonable and correct model behavior?”

Andrea Sipka M.Sc.

“Researching and developing applied AI solutions that make a habit of two things – to help, or at least to do no harm.”

Administration

Dr. Joshua Berger

Janina Hoppstädter B.Sc.

Marlies Thönnissen M.A.

Student Assistants

Emily Pöppelmann

Leon Schall

Our Partners