This article introduces refusal as an architectural invariant in intelligent systems, arguing that safety mechanisms layered above planning and optimization are structurally fragile. Rather than treating refusal as a learned behavior or a policy response, the work defines it as a system-level constraint that renders certain actions unreachable under every optimization regime. The article states a formal invariant, explains why policy-based refusal collapses under reward pressure, and reframes AI governance around testable, falsifiable constraints. Intended for AI safety researchers, system architects, and governance stakeholders, the work establishes refusal as a foundational design primitive for accountable, audit-ready intelligent systems.
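To make the central claim concrete, one possible shape for such an invariant is sketched below. This is an illustrative formalization, not the article's own statement: the policy class $\Pi$, the refused action set $A_R$, and the reachable-action map $\mathrm{Reach}(\pi)$ are assumed names introduced here for exposition.

\[
\forall \pi \in \Pi,\; \forall a \in A_R :\; a \notin \mathrm{Reach}(\pi)
\]

Read this way, no policy $\pi$ producible by any optimization regime over $\Pi$ can reach a refused action, regardless of the reward it is trained under; the constraint holds by construction rather than by learned behavior, which is what makes it testable and falsifiable at the architectural level.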