Although much of CERTAIN’s current research is technical in nature, we firmly believe that trustworthy AI requires more than computer science alone: it raises questions of hybrid intelligence, human oversight, agency, and the very concept of trust itself. Legal, ethical, empirical, and societal insights are indispensable. Normative frameworks, drawn from legal scholarship, ethics, and regulatory initiatives such as the EU AI Act, must guide development. Ethical theorizing must lie at the core of normative judgements, including specifically moral ones. Empirical research is needed to understand how AI systems interact with social institutions and public perceptions. Only through interdisciplinary collaboration can we ensure that AI is well-governed and that well-justified trust is fostered, not only in AI technologies but also in the social and institutional processes they increasingly shape. We are convinced that the interlocking of technical approaches, normative requirements, and human-centered empirical methods is key to developing trustworthy AI systems.
Contact: Kevin Baum, André Meyer-Vitali