The General Services Administration (GSA) published a proposed rule in the Federal Register on May 14, 2026, that would add General Services Administration Regulation (GSAR) clause 552.239-7001, "Contractor Use of Artificial Intelligence in Contract Performance," to covered GSA contracts for professional services, IT support, research and analysis, and advisory services. The proposed clause would require contractors to: disclose in writing to the contracting officer when any contract deliverable or analytical recommendation includes content generated by artificial intelligence systems; identify the specific AI tools used and their version information; maintain records of AI system usage for a minimum of three years; and certify that a qualified human with relevant expertise reviewed all AI-generated outputs before submission as a contract deliverable. The proposed rule follows GSA's AI policy framework published in early 2026 and is intended to create a baseline accountability standard for AI use in government service delivery, one that can be monitored and enforced through contract administration rather than left to voluntary disclosure.
Scope and Covered Contract Categories
The proposed clause would apply to GSA contracts in four broad service categories: professional services including management consulting, financial advisory, and policy analysis; IT professional services including software development support and systems integration advisory; research, development, and analysis services; and administrative support services that include document preparation, communications, or analytical work products. The clause explicitly does not apply to IT platform contracts where AI is built into the technical architecture of a system being developed — those disclosures are handled through separate technical documentation requirements — but rather to service delivery contracts where human expertise is the primary value being procured and AI tools are used to augment that expertise. The distinction tracks the practical reality of current contractor AI use: a management consultant using large language model tools to draft briefing materials is in a different category than a software developer building an AI-enabled application, and the proposed clause addresses the former without disrupting the latter. GSA's proposed rule acknowledges that many contractors are already using AI tools across their professional services work and that the clause is intended to make that use transparent and documented, not to prohibit it.
Industry Response and Implementation Challenges
The proposed rule has drawn a mixed response from industry. Large professional services contractors that have invested in AI capability and are using it at scale to improve service delivery efficiency generally support disclosure requirements as a way of distinguishing their thoughtful AI governance from lower-cost competitors that may use AI tools without adequate human review. Smaller contractors express concern about the administrative burden of the disclosure and record-keeping requirements, particularly for task orders where AI tools are used as general productivity aids — word processing, citation lookup, research summarization — in ways that are difficult to distinguish from AI use that substantively shapes a deliverable. GSA's proposed rule invites public comment on several questions: the appropriate threshold above which AI use should trigger disclosure; whether particular AI tool categories should be exempt; and how the human review certification should be structured to ensure it represents a genuine quality control step rather than a formality. The comment period closes in August 2026.
What It Means for Contractors
GSA's proposed AI clause signals a broader federal trajectory toward mandatory AI use disclosure on government service contracts, and contractors should begin developing internal policies now regardless of whether the final rule mirrors the proposal.
- GSA contractors in the covered service categories should inventory their current AI tool usage, identify which tools and use cases would be covered by the proposed disclosure requirement, and assess whether their current documentation practices are sufficient to support the record-keeping requirements; this assessment is valuable regardless of the final rule's specific form.
- The human review certification is the most substantive compliance obligation in the proposed clause and the one most likely to generate enforcement attention; contractors should ensure that their AI governance policies define what constitutes a "qualified human" reviewer and document the review process in a way that would survive a contracting officer's inquiry (a sketch of what such a review record might look like follows this list).
- GSA's Alliant 3 vehicle, the OASIS+ professional services contracts, and other large governmentwide acquisition contract (GWAC) vehicles are likely early candidates for clause application; contractors on these vehicles should submit comments on the proposed rule and engage GSA's acquisition policy office directly to shape the final clause language before it is inserted into active vehicles.
- The proposed clause will likely serve as a template for similar requirements from DoD, DHS, and other large agencies; investing in AI governance documentation practices now positions contractors to satisfy multiple agency requirements with a single internal policy framework rather than building a separate compliance program for each agency.
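For contractors building the documentation practices described above, a concrete record format can make the abstract requirements tangible. The sketch below is purely illustrative: the proposed clause does not prescribe a record format, and every field name, identifier, and value here is a hypothetical assumption about what a record sufficient to support the disclosure, three-year retention, and human review certification requirements might contain.

```python
# Illustrative only: a minimal internal record for AI usage on a deliverable.
# The proposed GSAR clause does not prescribe a format; all field names and
# values below are hypothetical.
from dataclasses import dataclass, asdict
from datetime import date, timedelta
import json

@dataclass
class AIUsageRecord:
    """One record per deliverable that incorporates AI-generated content."""
    contract_number: str          # GSA contract or task order identifier
    deliverable_id: str           # internal identifier for the deliverable
    ai_tool: str                  # specific AI system used (a clause requirement)
    ai_tool_version: str          # version information (a clause requirement)
    usage_description: str        # how the tool shaped the work product
    reviewer_name: str            # the "qualified human" who reviewed outputs
    reviewer_qualification: str   # basis for treating the reviewer as qualified
    review_date: date             # when the review occurred
    human_review_certified: bool  # reviewer attests all AI output was reviewed

    def retention_deadline(self) -> date:
        # The proposed clause requires retention for a minimum of three years.
        return self.review_date + timedelta(days=3 * 365)

record = AIUsageRecord(
    contract_number="47QXXX-26-F-0001",  # hypothetical task order number
    deliverable_id="briefing-2026-07-A",
    ai_tool="ExampleLLM",                # hypothetical tool name
    ai_tool_version="4.1",
    usage_description="Drafted first-pass briefing text; analyst rewrote findings.",
    reviewer_name="J. Analyst",
    reviewer_qualification="Senior policy analyst with relevant domain expertise",
    review_date=date(2026, 7, 10),
    human_review_certified=True,
)
print(json.dumps(asdict(record), default=str, indent=2))
```

Keeping one such record per deliverable, serialized to an archive with a computed retention deadline, would give a contracting officer's inquiry something concrete to examine.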
Liability Implications of AI Output Certification
The proposed GSAR clause's human review certification requirement (certifying that a qualified human with relevant expertise reviewed all AI-generated outputs before submission) creates a legal obligation with implications beyond administrative compliance. Under the False Claims Act (FCA), a certification in a contract deliverable submission that proves false, meaning that the review did not actually occur or was not conducted by a qualified reviewer, is a potential false claim if the deliverable was submitted for payment. The practical risk is that a contractor whose AI use policies require human review but whose quality control processes do not reliably enforce that review may be submitting deliverables with a certification that is technically false. This risk is not hypothetical: the history of government contract FCA enforcement includes numerous cases where companies had formally compliant policies but informal practices that deviated from those policies, and the deviation was the basis for FCA liability.

The certification requirement in the proposed clause forces contractors to make their AI governance processes legally accountable in a way that voluntary AI use policies do not. Law firms advising defense and civilian contractors on AI governance have begun recommending that companies treat the proposed clause as a template for internal policy design regardless of its current proposed status, on the theory that the certification standard it embodies will become the de facto compliance baseline across multiple agencies as similar clauses proliferate. Building audit trails and documentation practices that can support the certification before it becomes mandatory is more efficient than redesigning AI workflows under time pressure after the clause is finalized.
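One design option for such an audit trail, offered purely as an illustration and not as anything the proposed clause requires, is to make the trail tamper-evident by hash-chaining entries, so that an after-the-fact edit to any record is detectable. A minimal sketch, assuming the hypothetical record format above:

```python
# Illustrative only: a tamper-evident (hash-chained) audit trail for AI usage
# records. One possible design, not a requirement of the proposed clause.
import hashlib
import json

def append_entry(log: list[dict], entry: dict) -> None:
    """Append an entry, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    payload = json.dumps(entry, sort_keys=True, default=str)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"prev_hash": prev_hash, "entry": entry, "entry_hash": entry_hash})

def verify_log(log: list[dict]) -> bool:
    """Recompute the chain; any edited or deleted entry breaks verification."""
    prev_hash = "0" * 64
    for item in log:
        payload = json.dumps(item["entry"], sort_keys=True, default=str)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if item["prev_hash"] != prev_hash or item["entry_hash"] != expected:
            return False
        prev_hash = item["entry_hash"]
    return True

log: list[dict] = []
append_entry(log, {"deliverable_id": "briefing-2026-07-A", "certified": True})
append_entry(log, {"deliverable_id": "report-2026-08-B", "certified": True})
assert verify_log(log)
log[0]["entry"]["certified"] = False   # simulate after-the-fact tampering
assert not verify_log(log)
```

The design choice here is that integrity is verifiable from the log itself: a certification record that was quietly altered after submission will fail verification, which is exactly the property a contractor would want when demonstrating to a contracting officer that its review documentation is genuine.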