A new AISoLA contribution by CERTAIN researcher Dr. Kevin Baum and Andre Steingrüber probes the normative side of AI alignment, asking whether we should defer to experts (epistocracy) or empower stakeholders (democracy). The paper highlights that normative and metanormative uncertainty leave a justificatory gap that practical, and especially political, justification must fill. It distinguishes two kinds of practical justification: outcome-oriented (instrumental) and legitimacy-oriented (non-instrumental). The paper concludes that successfully defending democratic procedures for AI alignment is harder than commonly taken to be, and that purely epistocratic or purely democratic paths fall short. It therefore proposes hybrid frameworks that combine expert judgment, participatory input, and institutional safeguards to prevent monopolization and illegitimate coercion.
For CERTAIN, this paper connects directly to our guarantees by design, by tools, by transparency, and by interaction: engineering aligned systems while grounding them in legitimate human oversight.