The Trump administration is circulating draft policy language that would bar AI contractors from imposing usage restrictions on federal agencies, a direct response to the standoff between Anthropic and the Department of War that ended with the Pentagon reportedly designating Anthropic a supply chain risk. First reported by Nextgov/FCW and Government Executive on May 5–6, 2026, the draft represents the most aggressive government assertion of technology usage rights since the JEDI cloud contract debates of the early 2020s.
What the draft language says
According to Nextgov/FCW's reporting, the draft establishes a foundational principle: "It is for the democratically elected government to determine what is a lawful and appropriate use of a particular technology, not solely a company." The practical effect, if enacted, would be to render unenforceable any provision in an AI vendor's terms of service, contract, or acceptable use policy that restricts a federal agency from using that vendor's technology for a government-determined lawful purpose.
The Anthropic backstory
The proximate cause of this policy push is well-documented. Anthropic, the AI safety company behind the Claude models, maintained usage policy restrictions that conflicted with certain Pentagon applications. When the Department of War sought to expand Claude's deployment on classified networks for operational use cases that Anthropic's policies did not permit, the company declined to modify its terms. The Pentagon responded by removing Anthropic from its approved vendor list for classified AI deployments and reportedly flagging the company as a potential supply chain concern — a designation that effectively bars it from competing for the most sensitive DoD AI work.
The broader lesson the administration drew was that any AI vendor's commercial terms of service can, in theory, become a veto over government operations. That is unacceptable to an administration that has moved aggressively to consolidate executive authority over technology policy.
The Commerce Department parallel: CAISI renegotiation
Separately but simultaneously, the administration is renegotiating the Center for AI Standards and Innovation (CAISI) agreements at the Commerce Department, the framework under which AI labs received access to classified threat data for safety testing. Nextgov reports that Google DeepMind, Microsoft, and Elon Musk's xAI are being brought into a renegotiated framework, while Anthropic remains outside. CAISI access is valuable to AI labs both practically, because the classified threat data feeds their safety research, and reputationally. Using that access as leverage in a contractual dispute with DoD sets a significant precedent.
What this means for AI contractors
Any company selling AI tools, platforms, or models to the federal government needs to assess its existing contracts and standard terms now:
- Review your AUP/ToS for government carve-outs — if your acceptable use policy restricts federal agencies from using your technology for law enforcement, military, surveillance, or national security purposes, you may need to restructure your government contracting terms before this policy takes effect
- Separate commercial and government ToS — the cleanest solution for dual-market AI companies is distinct terms of service for federal government customers, with usage restrictions grounded in applicable federal law rather than company policy (see the sketch after this list for what that separation can look like at the enforcement layer)
- The FedRAMP implication — FedRAMP authorization does not currently require vendors to waive usage restrictions for agency customers. If this policy is formalized, FedRAMP requirements may be updated to mandate such waivers
- Anthropic is the cautionary tale — losing IL6/IL7 access and CAISI participation represents tens of millions in potential government revenue annually. No AI company wants to repeat that outcome
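To make the dual-ToS point concrete, here is a minimal sketch of how segment-specific usage policies might be wired into a vendor's request-gating layer. Everything in it is hypothetical: the `Segment`, `UsagePolicy`, and `authorize` names, the prohibited-use categories, and the per-request check are illustrative assumptions, not any vendor's actual API or policy set.

```python
# Minimal sketch: per-segment usage-policy gating for a dual-market AI vendor.
# All names and categories here are hypothetical illustrations.
from dataclasses import dataclass, field
from enum import Enum


class Segment(Enum):
    COMMERCIAL = "commercial"
    FEDERAL = "federal"


@dataclass
class UsagePolicy:
    """A set of prohibited use categories enforced at request time."""
    prohibited: set[str] = field(default_factory=set)

    def allows(self, use_category: str) -> bool:
        """Return True if the declared use category is permitted."""
        return use_category not in self.prohibited


# Commercial customers get the vendor's full acceptable-use policy.
COMMERCIAL_POLICY = UsagePolicy(prohibited={
    "surveillance",
    "weapons_targeting",
    "disinformation",
})

# Federal customers get restrictions limited to what statute requires,
# so agency use is governed by law rather than company policy.
FEDERAL_POLICY = UsagePolicy(prohibited={
    "export_controlled_disclosure",  # e.g., export-control violations
})

POLICY_BY_SEGMENT = {
    Segment.COMMERCIAL: COMMERCIAL_POLICY,
    Segment.FEDERAL: FEDERAL_POLICY,
}


def authorize(segment: Segment, use_category: str) -> bool:
    """Gate a request against the policy for the caller's contract segment."""
    return POLICY_BY_SEGMENT[segment].allows(use_category)


if __name__ == "__main__":
    print(authorize(Segment.COMMERCIAL, "surveillance"))  # False
    print(authorize(Segment.FEDERAL, "surveillance"))     # True
```

The design point is that the federal rule set derives from statute rather than from the vendor's commercial acceptable-use policy, which is the separation the draft language would effectively require of contractors.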
Timeline and status
As of May 7, 2026, the language remains a draft, not yet an executive order or formal OMB guidance. The administration has not announced a comment period or an effective date. Given the pace of executive action in this administration, firms should treat this as a 60–90 day horizon and begin internal reviews now rather than waiting for a final rule.