Topic: AI Safety, Alignment, Machine Ethics

As AI systems become more capable and autonomous, their behaviour can become opaque, unpredictable, and difficult to control. This raises the risk of unintended, harmful, or misaligned outputs, whether caused by technical limitations, misuse, or a mismatch with human goals. Ensuring that these systems are safe, causing neither unintended nor unnecessary harm, and that they meet broader normative and moral expectations is a crucial and worthwhile goal. Depending on the system and its application, the relevant challenges fall under the domains of AI Safety, (AI/Value) Alignment, or Machine Ethics. These fields combine normative insight with technical rigour and are central to building trustworthy AI systems that deserve and sustain our trust.

Contact: Kevin Baum, Patrick Schramowski