Trump administration to test new AI models from Google, Microsoft, xAI before public release
The Trump administration is weighing an executive order that would form a government AI working group with a formal model-review process. White House officials briefed Google, Anthropic, and OpenAI on the plans last week. Separately, Microsoft, Google, and xAI agreed to give the U.S. government early access to new AI models for national security testing, with the Center for AI Standards and Innovation (CAISI) evaluating the models for risks, building on a July 2025 pledge.
Why It Matters
The moves show the administration seeking tighter oversight of AI development amid rising national security concerns, laying out steps to vet models before they are broadly deployed.
Timeline
7 Events
Growing Washington concern about national security risks of AI
Officials cite rising concern in Washington over the national security risks posed by powerful AI systems; early-access testing aims to identify threats before deployment.
Microsoft to test AI systems with government scientists; UK agreement noted
Microsoft said it will work with US government scientists to test AI systems 'in ways that probe unexpected behaviours'; it also has a similar agreement with the UK's AI Security Institute.
US government allowed early access to AI models for national security testing
Microsoft, Google, and Elon Musk's xAI agreed to provide early access to new AI models for national security testing; CAISI would evaluate the models prior to deployment and research their capabilities and security risks.
White House briefings with Google, Anthropic and OpenAI
White House officials told executives from Anthropic PBC, Alphabet Inc.’s Google and OpenAI about some of the plans during meetings last week.
Executive order consideration: creating AI working group and model-review process
The White House is considering an executive order to create a government AI working group, including a formal process for reviewing new AI models.
CAISI statement on pact to evaluate models before deployment
The Center for AI Standards and Innovation at the Department of Commerce said the pact would allow it to evaluate the models before deployment and conduct research to assess their capabilities and security risks; the agency also referenced the July 2025 pledge.
Pledge to vet AI models for national security risks (July 2025)
In July 2025, the Trump administration pledged to partner with technology companies to vet their AI models for national security risks.