What we do

Research

One of the goals of CERTAIN is to conduct excellent research on diverse aspects of trustworthy AI. Several doctoral students and researchers work under the umbrella of CERTAIN on topics such as safe and trustworthy reinforcement learning, interpretability and transparency of large language models, and guardrails for large AI models. Research in CERTAIN also spans interdisciplinary topics such as AI ethics, effective and transparent human oversight, interdisciplinary aspects of AI alignment, and more.


Defragmentation

Building trustworthy AI and measuring the trustworthiness of AI systems has emerged as one of the most pressing topics in AI research, especially since the boom of generative AI in recent years. As a consequence, trustworthiness research is heterogeneous: it has 1) many adjacent research fields and 2) often unclear objectives and processes. One of the goals of CERTAIN is to defragment this heterogeneous landscape through publications, standardization proposals, and communication with its members. This includes building a network of stakeholders in the European Union, spanning institutes, companies, and other entities with diverse backgrounds and views on AI usage and benchmarking.


Communication

CERTAIN also serves a dissemination purpose by communicating the latest news, calls, collaboration opportunities, and research outcomes around trustworthy AI to the CERTAIN network. We also perform matchmaking between CERTAIN network partners, helping them find complementary expertise, for example when building a project consortium or applying for funding.


Application

Finally, CERTAIN aims to bridge the gap between academic AI trustworthiness research and industry's concrete need for trustworthy systems. We do so by connecting industry partners that are members of the CERTAIN network with relevant academic stakeholders, and by fostering an environment for applied research on trustworthiness that can be transferred and adapted to the real-world problems faced by industries using AI.

Topics

To achieve these objectives, we employ a wide range of methodologies, including causal inference, neuro-explicit modeling, and mechanistic interpretability. Our interdisciplinary approach enables us to develop innovative yet practical solutions across different domains, driving the advancement of trustworthy AI systems. We are committed to responsible AI development and deployment, ensuring that AI is designed and implemented in a way that supports autonomy, human oversight, responsible decision-making, and other essential aspects of ethical and sustainable AI.
