On May 1, 2026, the Department of War announced agreements with eight commercial AI firms to deploy their models on Pentagon classified networks — Impact Level 6 (Secret) and Impact Level 7 (Top Secret/SCI). The initial press release named seven companies; the Pentagon CTO's office added Oracle to the list later the same day. The full roster: Amazon Web Services, Google, Microsoft, OpenAI, SpaceX, NVIDIA, Reflection AI, and Oracle. Breaking Defense first reported the structure of the agreements; NextGov/FCW provided customer-facing operational detail.
The full list and what each firm brings
| Company | Primary AI Capability | Prior IL Status | Notes |
|---|---|---|---|
| Amazon Web Services | Bedrock foundation model platform, Titan models | IL6 (GovCloud) | Infrastructure + model hosting |
| Google | Gemini models, Vertex AI platform | IL4/IL5 via Google Public Sector | Expanding from commercial to classified |
| Microsoft | Azure OpenAI Service, Copilot, GPT-4o | IL6 (Azure Government) | Deepest existing classified footprint |
| OpenAI | GPT-4o, o3, reasoning models | New to classified work via Microsoft | API access through Azure Government channel |
| SpaceX | Grok (via xAI partnership), Starshield network | New to classified work | Non-traditional; brings comms + AI together |
| NVIDIA | NIM inference platform, GPU compute | Hardware already present; new at the software/IL layer | Inference infrastructure, not models |
| Reflection AI | Reflection 70B (open-weight frontier model) | New entrant to defense AI | Signals Pentagon openness to open-weight models |
| Oracle | OCI AI platform, select third-party models | IL5 (OCI for Government) | Added day-of; FedRAMP High authorized |
Why "deliberate redundancy" — and what it means in practice
Under Secretary for Research & Engineering and Pentagon CTO Emil Michael addressed the multi-vendor structure directly in the May 1 briefing, per Breaking Defense: "What we've learned since we started this effort at the Department of War is that it's irresponsible to be reliant on any one partner." The statement is a pointed departure from the JEDI-era single-vendor cloud procurement model, which produced years of litigation, delayed cloud adoption, and ultimately collapsed into the multi-award JWCC vehicle.
The practical implication is that program managers and systems integrators can now choose among multiple IL6/IL7-authorized model providers for a given AI application — picking based on model performance, cost, latency, and mission fit rather than being constrained to whichever vendor won the last enterprise contract. For capability development teams building AI-enabled ISR, logistics, cyber, or decision-support tools, the multi-vendor architecture enables the kind of rapid model benchmarking and A/B evaluation that was previously impossible in the classified environment.
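The model benchmarking described above can be sketched in a few lines. This is a hypothetical illustration only — the provider names, scoring, and stub "models" below are placeholders, not real IL6/IL7 endpoints or APIs; an actual evaluation would call authorized inference services and use mission-specific task sets.

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalResult:
    provider: str
    accuracy: float
    avg_latency_ms: float

def benchmark(providers: dict[str, Callable[[str], str]],
              tasks: list[tuple[str, str]]) -> list[EvalResult]:
    """Run every provider against the same task set, then rank by
    accuracy (descending) and average latency (ascending)."""
    results = []
    for name, model in providers.items():
        correct, total_ms = 0, 0.0
        for prompt, expected in tasks:
            t0 = time.perf_counter()
            answer = model(prompt)
            total_ms += (time.perf_counter() - t0) * 1000
            # Simple containment check stands in for a real grader.
            correct += int(expected.lower() in answer.lower())
        results.append(EvalResult(name, correct / len(tasks),
                                  total_ms / len(tasks)))
    return sorted(results, key=lambda r: (-r.accuracy, r.avg_latency_ms))

# Stub callables standing in for hypothetical authorized endpoints.
providers = {
    "vendor_a": lambda p: "the answer is 4",
    "vendor_b": lambda p: "unsure",
}
tasks = [("What is 2 + 2?", "4")]
ranking = benchmark(providers, tasks)
print(ranking[0].provider)  # vendor_a ranks first on this toy task set
```

The point of the sketch is the shape of the workflow: with several authorized providers behind a common interface, the same evaluation harness ranks them per mission, which is exactly the A/B comparison a single-vendor environment forecloses.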
The Anthropic exclusion: procurement artifact or deliberate signal?
Anthropic's absence from the list is the most commercially significant element of the announcement. Anthropic had previously been operating Claude models on IL6 networks under a separate arrangement with CDAO (the Chief Digital and Artificial Intelligence Office). The public explanation offered by Pentagon officials — that some of the eight firms are "still finalizing operational and security paperwork" — did not specifically address Anthropic's status.
Industry reporting and subsequent White House policy drafts (see related GovConFeed story: White House Moves to Strip AI Vendors' Veto Power) point to a usage-restriction dispute: Anthropic's Acceptable Use Policy contains provisions that conflicted with certain Pentagon applications the DoD sought to authorize. When Anthropic declined to modify its terms for classified government use, the Pentagon designated the company a supply chain risk under the DoD AI Ethics Principles framework. For Anthropic to re-enter the classified AI market would require either a modification to its enterprise government terms of service or a specific waiver process that does not yet exist in published policy.
Reflection AI: the most notable inclusion
Reflection AI is the least-known name on the list and potentially the most strategically significant. Founded in 2024 by former Google DeepMind researchers, Reflection AI released Reflection 70B — a 70-billion parameter model trained with a novel self-reflection technique that showed competitive benchmark performance against much larger proprietary models. Its inclusion alongside OpenAI and Google signals two things: first, that the Pentagon is deliberately maintaining optionality against concentration in any single model architecture; second, that open-weight models running on government-operated infrastructure (rather than vendor-hosted inference) are a viable path to IL7 authorization.
What this means for contractors — the full implications
The contracting implications extend across multiple tiers of the defense industry:
- Large systems integrators (Booz Allen, SAIC, Leidos, CACI, ManTech, Peraton): Eight IL6/IL7 model APIs are now available to build against. Expect aggressive pursuit of AI-enabled task orders in ISR, logistics, cyber, and command-and-control domains where these primes already hold access vehicles. The model selection question — which AI for which mission — becomes a core differentiator in proposal development.
- Mid-tier IT contractors: IL6 ATO (Authorization to Operate) is now the minimum table-stakes requirement for meaningful AI work at the classified tier. Firms operating at IL4 or IL5 face a competitive gap; the investment in obtaining IL6 ATO for a specific product or service has become significantly more justified.
- AI model evaluation and assurance firms: DoD will need independent third-party evaluation of model performance on classified tasks, bias testing, adversarial robustness assessment, and ongoing monitoring. Red-team services for LLMs operating on sensitive data are an emerging specialty with no established incumbent base.
- Data pipeline and MLOps contractors: Getting a model authorized at IL6/IL7 is step one; integrating it into operational workflows requires secure data pipelines, fine-tuning infrastructure, and model lifecycle management. These engineering services generate substantial task-order revenue downstream of the model authorization.
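The "deliberate redundancy" posture has a direct engineering expression in the integration work described above: routing each mission's traffic to a primary authorized model with automatic failover. The sketch below is hypothetical — the mission categories and vendor names are illustrative placeholders, not an actual DoD routing architecture.

```python
# Hypothetical routing table: mission -> primary/fallback provider.
# Names are placeholders, not real authorized endpoints.
ROUTING_TABLE = {
    "isr":       {"primary": "vendor_a", "fallback": "vendor_b"},
    "logistics": {"primary": "vendor_c", "fallback": "vendor_a"},
}

def route(mission: str, degraded: set[str]) -> str:
    """Pick an authorized model endpoint for a mission, failing over
    to the fallback when the primary provider is degraded."""
    entry = ROUTING_TABLE[mission]
    if entry["primary"] not in degraded:
        return entry["primary"]
    if entry["fallback"] not in degraded:
        return entry["fallback"]
    raise RuntimeError(f"no healthy provider for mission {mission!r}")

print(route("isr", degraded={"vendor_a"}))  # fails over to vendor_b
```

Even at this toy scale, the design choice is visible: multi-vendor authorization turns provider outages (or revoked terms of service) into a configuration change rather than a program-level crisis.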