Trusted AI Day 2026 – Shaping Trust in Artificial Intelligence

Date: Thursday, 05 February 2026
Venue: VisCenter, DFKI Saarbrücken, Saarland Informatics Campus
Time: 1:00 pm – approx. 5:00 pm
Format: In-person
About the Event
How can Artificial Intelligence be designed to be trustworthy, secure, and responsible?
At the 2nd Trusted AI Day 2026, experts from research, industry, government, and civil society will come together to discuss strategies for building and implementing Trust in AI. Organized by the German Research Center for Artificial Intelligence (DFKI) and CERTAIN, the event highlights the latest developments and implementation efforts in the field of trustworthy AI.
Program Highlights
- Opening remarks from leaders in AI and digital policy
- Insights into the progress of CERTAIN
- Three keynote speeches: Regional, national, and international perspectives on Trusted AI
- Two panel discussions: Deep Dive & Implementation
- Networking and open dialogue with AI stakeholders
Agenda
| Time | Session |
|---|---|
| 13:00 – 13:10 | Opening & Welcome – Reinhard Karger, DFKI |
| 13:10 – 13:30 | Welcome & Introduction – Philipp Slusallek, DFKI; Jürgen Barke, Minister for Economic Affairs, Innovation, Digitalisation and Energy |
| 13:30 – 13:45 | CERTAIN Update – Progress and Outlook – Kevin Baum, DFKI |
| 13:45 – 15:15 | Keynotes:<br>– Challenges & approaches to test AI systems in practice – Antoine Gautier, QuantPi<br>– Guardrails for Trusted AI: secure & transparent design of AI – Vera Sikes, Federal Office for Information Security (BSI)<br>– Privacy in the Era of Large Language Models: Risks, Challenges, and the Road Ahead – Raouf Kerkouche, Inria |
| 15:15 – 15:45 | Coffee Break & Networking |
| 15:45 – 16:30 | Panel 1: Trusted AI – Deep Dive – André Meyer-Vitali (Moderator); Maximilian Poretschkin, Fraunhofer IAIS; Sarah Sterz, Saarland University; Wico Mulder, TNO; Saqib Bukhari, ZF; Bertrand Braunschweig, European Trustworthy AI Association (ETAIA)<br>Panel 2: Trust in AI – Ways to Implementation – Philipp Slusallek (Moderator); Freek Bomhof, TNO; Philip Piatkiewicz, Adra; Nicolas Rebierre, ETAIA; Mathias Sander, TÜV AI Lab |
| 16:30 – 17:00 | Wrap-up & Summary |
| 17:00 – open end | Get-together & Networking Reception |
Opening Speaker

After completing his degree in public administration (FH) at the Saarland University of Applied Sciences, Jürgen Barke began his professional career in 1986 in the higher non-technical civil service, most recently serving as personal advisor to the then State Secretary. From 1991 to 2001, he was full-time deputy mayor of the city of Lebach. In February 2001, he joined Michels GmbH as head of human resources and authorized signatory. At the end of 2002, he founded Jürgen Barke Consult in Lebach, and at the beginning of 2003 he became sole managing director of KomCon GmbH. In May 2012, Jürgen Barke was appointed State Secretary in the Ministry of Economic Affairs, Labor, Energy, and Transport. Since April 2022, he has been Minister for Economic Affairs, Innovation, Digitalisation and Energy and Deputy Prime Minister of Saarland.
Keynote Speakers

Dr. Antoine Gautier is chief scientist and co-founder at QuantPi. He has been working on technical assessments of AI systems for more than ten years. A mathematician by training, Antoine did his PhD in a machine learning group. His academic work has been published at leading venues, and he is an active contributor to multiple standardization committees in the context of quality assurance for AI systems. As part of his responsibilities at QuantPi, Antoine serves as principal investigator for various grant and tender projects at the German national and European levels.

Dr. Raouf Kerkouche is a tenured research scientist at Inria, with over eight years of experience working on trustworthy AI. He conducted postdoctoral research at CISPA Helmholtz Center for Information Security, where he focused on security and privacy in machine learning. Before that, he completed his PhD at Inria Grenoble, investigating differentially private federated learning and the trade-offs between privacy, security, bandwidth efficiency, and utility. His current research program is organized around four key axes: privacy-preserving new foundation models, safe and secure new foundation models, collaboration in the era of new foundation models, and limiting the proliferation of deepfakes and misinformation. His results have been published in leading international venues such as NeurIPS, ICLR, UAI, and PoPETs.

Dr. Vera Sikes is Head of Division in the Technology Strategy and Information Technology Department at the Federal Office for Information Security (BSI). After studying administrative management and electrical engineering and working in IT operations for many years, she has been working since 2014 on the development and establishment of knowledge management systems for cyber security issues in security authorities. She joined the BSI in March 2022, where she works with the competence centres for security in cloud computing and artificial intelligence to identify technical changes and develop appropriate solutions with a view to ensuring information security.
Panelists

Dr. Freek Bomhof is the program director of Appl.AI at TNO, a programme focused on research into trustworthy AI. He has been involved in Horizon Europe projects such as TAILOR and VISION and is heavily engaged in the development of the GPT-NL language model. He also serves as a board member of the Big Data Value Association (BDVA) and is one of the driving forces behind the organization of the Trustworthy AI Summit 2026, which is scheduled to be held in the Netherlands.

Dr. Bertrand Braunschweig is an independent consultant and provides scientific support to various organizations, including serving as the scientific coordinator of the European Trustworthy AI Association, following his coordination of the scientific part of the Confiance.ai program. He has been co-editor of the forthcoming European standard on AI risk management, in support of the European AI Act, and co-chair of the series of ATRACC (AI Trustworthiness and Risk Assessment in Challenged Contexts) AAAI symposia. Following a career as a researcher and project manager in simulation and AI in the field of energy, Bertrand previously headed the Information and Communication Technologies (ICT) department of the Agence nationale de la recherche (ANR) and two Inria research centers (Rennes, then Saclay), produced two editions of the Inria white paper on AI, and coordinated the research component of the French national artificial intelligence strategy.

Dr. Syed Saqib Bukhari is the Chief AI Engineer at ZF Group’s AI Lab in Saarbrücken, where he leads the development of scalable and trustworthy AI systems for safety-critical automotive and manufacturing applications. He earned his PhD in Artificial Intelligence from the German Research Center for Artificial Intelligence (DFKI) and brings nearly 20 years of experience in AI research, innovation, and industrial deployment. His work focuses on bridging deep learning, multimodal perception, and human-centric evaluation to create robust, interpretable, and production-ready AI systems.

Dr. Wico Mulder is a senior scientist at TNO, holding an MSc in Physics from the University of Groningen and a PhD in Computer Science from the University of Amsterdam. Affiliated with the University of Groningen, where he contributes to advancing cognitive AI systems, Wico brings an industrial background in consultancy and software engineering, combining academic research with practical applications across sectors such as healthcare and energy sustainability. With Human–AI interaction as a central theme of his work, he actively engages stakeholders in diverse projects, translating academic insights into practical solutions that enhance business performance and promote human well-being.

Dr. Philip Piatkiewicz is a seasoned European affairs and project management professional with a wealth of knowledge in technology, innovation, and research policies. He has significant international experience coordinating large, complex collaborative projects and is experienced in strategic planning, helping organisations position themselves to deliver future operational objectives. He is currently Secretary General of the AI, Data and Robotics Association (Adra), where he coordinates the private-side ecosystem of the European Partnership on AI, Data and Robotics, one of the European Partnerships in Cluster 4 (digital, industry, and space) in Horizon Europe. The partnership was established to galvanize €2.6 billion of investments, delivering projects geared to developing sovereign and trustworthy AI, data, and robotics technologies and working with industry to deploy robust, safe, and explainable innovation.

Dr. Maximilian Poretschkin heads the AI Assurance and Assessment department at Fraunhofer IAIS, where he advises companies and public authorities worldwide on trustworthy AI. His research interests include the operationalization of legal requirements in computer science, the development of AI testing methods, criteria, and tools, and the creation of AI governance frameworks. He has published one of the first AI testing catalogs and plays a leading role in many national projects for the development of AI testing standards, including ZERTIFIZIERTE KI, MISSION KI, and the Zentrum für vertrauenswürdige KI (zvki).

Dr. Nicolas Rebierre serves as president of the European Trustworthy AI Association, a non-profit association with the mission of empowering engineers with state-of-the-art, open-source methodologies and tools to build trustworthy AI systems. Nicolas brings extensive experience from the technology industry, including roles in engineering, product management, open-source program offices, ventures, industrial research, and leadership.

Dr. Mathias Sander is Senior AI Certification Manager at TÜV AI.Lab, a joint venture founded by some of the largest and most important companies in the certification industry in Germany and beyond. The TÜV AI.Lab is paving the way for trustworthy AI by developing conformity criteria and test methods for AI systems. He leads the certification work package in the European research project TEF Health. Previously, he was responsible for the regulatory and clinical strategy for AI-based software as a medical device at a startup company, focusing on the implementation of quality and information security management systems as well as on regulatory aspects related to AI, software, and cybersecurity in the MedTech environment.

Sarah Sterz is a research assistant at Saarland University in Prof. Holger Hermanns’ Chair of Reliable Systems and Software. There she works at the interface between computer science and philosophy, focusing primarily on topics related to computer ethics. She is involved in teaching as a lecturer for the course “Ethics for Nerds,” in which she teaches philosophical and ethical fundamentals to computer scientists.
Target Audience
This event is open to professionals and stakeholders in:
- Artificial Intelligence research and development
- Public policy and regulatory bodies
- Private sector and startups
- Non-profits and academia
Registration
Participation is free of charge, but registration is required.