As artificial intelligence systems acquire advanced reasoning, planning, and world-modeling capabilities, the dominant risk shifts from incorrect prediction to unauthorized assertion and execution. Existing approaches—such as alignment objectives, interpretability, and deliberative reasoning—improve correctness but do not define epistemic jurisdiction: the permission boundary governing what an AI system is allowed to assert or do on behalf of an institution.
This paper introduces ARC-S (Architectural Risk Control System), a system-level authority control architecture that prevents AI systems from producing assertions or behaviors exceeding their defined jurisdictional authority. ARC-S constrains execution paths through enforceable invariants, pre-generation classification, refusal-first failure modes, and auditable control flows. The architecture operates independently of model capability, optimization strategy, and internal reasoning depth, and it fails closed under epistemic ambiguity.
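To make these mechanisms concrete, the following is a minimal sketch of the gating pattern described above, not the paper's implementation; all names (JurisdictionGate, Verdict, authorize) and the specific classifier interface are illustrative assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, Optional


class Verdict(Enum):
    PERMIT = auto()
    REFUSE = auto()


@dataclass
class AuditEntry:
    request: str
    verdict: Verdict
    reason: str


@dataclass
class JurisdictionGate:
    """Hypothetical ARC-S-style gate: classifies a request before any
    generation occurs and refuses whenever authority is ambiguous."""

    # Enforceable invariants: predicates over the request, all of which
    # must hold for the request to be permitted.
    invariants: list[Callable[[str], bool]]
    # Pre-generation classifier: True (in scope), False (out of scope),
    # or None (ambiguous). Ambiguity must fail closed.
    classify: Callable[[str], Optional[bool]]
    # Auditable control flow: every decision is appended here.
    audit_log: list[AuditEntry] = field(default_factory=list)

    def authorize(self, request: str) -> Verdict:
        # Refusal-first: start from REFUSE and only upgrade to PERMIT
        # when jurisdiction is affirmatively established.
        verdict, reason = Verdict.REFUSE, "default refusal"
        in_scope = self.classify(request)
        if in_scope is None:
            reason = "epistemic ambiguity: fail closed"
        elif not in_scope:
            reason = "outside jurisdictional authority"
        elif not all(inv(request) for inv in self.invariants):
            reason = "invariant violation"
        else:
            verdict, reason = Verdict.PERMIT, "within jurisdiction"
        self.audit_log.append(AuditEntry(request, verdict, reason))
        return verdict
```

Because the gate evaluates requests before generation and never inspects model internals, it remains indifferent to how capable the underlying model is, which is the sense in which the architecture operates independently of capability and reasoning depth.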
ARC-S does not improve what AI systems can know; it constrains what they are permitted to assert, decide, or execute. As AI capability scales, authority—not intelligence—becomes the primary governance problem. This work presents ARC-S as foundational infrastructure for liability-aware deployment of high-capability AI systems.