Frontier AI safety policies concentrate on prevention (capability evaluations, deployment gates, and usage constraints) while neglecting the institutional capacity to coordinate responses when prevention fails. We argue that this coordination gap is structural: investments in ecosystem robustness yield diffuse benefits but impose concentrated costs, generating systematic underinvestment. Drawing on risk-governance regimes in nuclear safety, pandemic preparedness, and critical infrastructure protection, we propose that analogous mechanisms (precommitment, shared protocols, and standing coordination venues) could be adapted to frontier AI governance. Without such coordination architecture, institutions cannot learn from failures at the pace of relevance.