MythosWatch

Evaluated · Government / United States

Center for AI Standards and Innovation (CAISI)

CAISI is the primary US government body evaluating unreleased frontier AI models, including Mythos. Its expansion to cover Google, Microsoft, and xAI, triggered by Mythos, positions it as the institutional anchor for a potential mandatory pre-deployment AI safety review framework.

AI model evaluation · national security testing · frontier AI governance · pre-deployment review

Entity log

High impact · Evaluated

NIST/Bloomberg: CAISI signs new pre-deployment AI security testing agreements with Google DeepMind, Microsoft, and xAI; White House studying executive order requiring AI model safety reviews, directly prompted by Mythos

The Center for AI Standards and Innovation (CAISI), a NIST division within the Commerce Department, announced on May 5 that it had signed new pre-deployment AI safety evaluation agreements with Google DeepMind, Microsoft, and xAI, building on prior agreements with Anthropic and OpenAI from 2024. The agreements allow CAISI to evaluate AI models before public release in classified environments, potentially with safeguards reduced or removed, and include post-deployment research and information-sharing. CAISI director Chris Fall stated the center had completed more than 40 pre-deployment evaluations, including of unreleased state-of-the-art models. Separately, White House National Economic Council Director Kevin Hassett confirmed the administration is studying "possibly an executive order" to create a clear roadmap for evaluating advanced AI systems before release.

The CAISI expansion and potential executive order are the US government's most concrete institutional response to Mythos-class capability: the pre-deployment evaluation model piloted with Anthropic and OpenAI is being extended across the major AI labs, and a mandatory pre-deployment review framework, if enacted, would formalize the process that Mythos made urgent.