Anthropic Ban in U.S. Federal Agencies – GSA Removes Claude, Pentagon Flags Supply-Chain Risk

The Anthropic ban in U.S. federal agencies temporarily removes Anthropic from key procurement and usage channels: GSA is pulling Anthropic from USAi.gov and the Multiple Award Schedule, while the Pentagon is simultaneously announcing a “supply chain risk” designation.

Anthropic ban in U.S. federal agencies puts Anthropic under pressure

In Washington, a dispute over the boundaries of AI use in national security is escalating: On February 27, 2026, U.S. President Donald Trump ordered U.S. federal agencies to cease using Anthropic technology "immediately," albeit with a transition period of up to six months for organizations that have already integrated Anthropic deeply into their workflows. The move is accompanied by procurement actions by the General Services Administration (GSA) and a parallel announcement of a "supply chain risk" designation within the defense establishment.

This is not a classic case of a compromised software supply chain; rather, it is a state-level risk and control decision. Even so, the outcome resembles a vendor cut: systems, contracts, and data flows must be reassessed within days.
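For teams facing such a vendor cut, a practical first step is inventorying where the vendor's endpoints, credentials, and SDKs appear in code and configuration. The sketch below is illustrative and not from the article: the search patterns (API hostname, environment variable, SDK import, model-ID prefix) are common conventions for Anthropic integrations, but a real inventory would also have to cover IaC templates, CI secrets, and gateway configurations.

```python
import re
from pathlib import Path

# Illustrative patterns that commonly indicate Anthropic usage.
# A real audit should also cover Terraform, CI secret stores, and
# proxy/gateway configs, which this file-level scan will not see.
PATTERNS = {
    "api_endpoint": re.compile(r"api\.anthropic\.com"),
    "api_key_env": re.compile(r"ANTHROPIC_API_KEY"),
    "sdk_import": re.compile(r"\bimport anthropic\b|\bfrom anthropic\b"),
    "model_id": re.compile(r"claude-[0-9a-z.\-]+"),
}

def scan_tree(root: str) -> list[tuple[str, str, int]]:
    """Return (file, pattern_name, line_number) hits under root."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binaries and unreadable files
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pat in PATTERNS.items():
                if pat.search(line):
                    hits.append((str(path), name, lineno))
    return hits
```

The output is deliberately flat (one tuple per hit) so it can be dumped into a spreadsheet for the contract and data-flow review the article describes.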

What the Anthropic ban in U.S. federal agencies specifically means

So far, the most concrete, publicly traceable implementation is coming through procurement channels. The GSA states that it is removing Anthropic from USAi.gov as well as from the Multiple Award Schedule (MAS) — the central contracting and purchasing vehicle for many commercial products and services across the U.S. federal government. As a result, the “availability” of Anthropic offerings for agencies is being sharply curtailed in practice: new awards, renewals, and standardized ordering paths become more difficult or impossible.

In parallel, Reuters describes the presidential instruction as a government-wide “phase-out” of Anthropic technology. Reporting emphasizes that the defense department (referred to by this administration as the “Department of War”) and other entities are expected to receive a transition window of up to six months — coupled with the political expectation that Anthropic should actively support the transition.

Is the Anthropic ban in U.S. federal agencies a legal prohibition, or “only” an instruction?

In practice, the effect for U.S. federal agencies is close to a ban: if GSA cuts off procurement and platform channels and the executive branch halts usage, agencies are left with rollback and replacement. At the same time, it is notable that the publicly cited documents and statements do not foreground a clearly identified Executive Order number or an OMB memo with a docket reference — the measure is currently visible primarily through announcements and procurement levers.

A “hard” statutory prohibition is different from an executive instruction plus contract/procurement steering. For affected organizations, however, operational consequences matter more than legal semantics: projects using Claude in classification chains, development processes, SOC workflows, or knowledge management must move quickly to alternatives.
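One standard architectural mitigation for exactly this scenario is to hide the model provider behind a thin interface, so that policy changes are handled in one routing function rather than at every call site. The sketch below is hypothetical (the interface, class names, and policy flag are not from the article) and shows the shape of the pattern, not any agency's actual implementation.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Completion:
    text: str
    provider: str

class ChatProvider(Protocol):
    """Minimal provider-agnostic interface for chat completions."""
    def complete(self, prompt: str) -> Completion: ...

class AnthropicProvider:
    def complete(self, prompt: str) -> Completion:
        # A real implementation would call the Anthropic API here;
        # under the ban it is disabled at the routing layer instead.
        raise RuntimeError("provider disabled by policy")

class ApprovedFallbackProvider:
    def complete(self, prompt: str) -> Completion:
        # Placeholder for whatever replacement model is approved.
        return Completion(text=f"[fallback] {prompt}", provider="fallback")

def get_provider(policy_allows_anthropic: bool) -> ChatProvider:
    """Route based on current policy, not on call-site code."""
    if policy_allows_anthropic:
        return AnthropicProvider()
    return ApprovedFallbackProvider()
```

With this indirection, classification chains, SOC workflows, and knowledge-management tooling call `get_provider(...)` and never name a vendor directly, which is what makes a six-month phase-out tractable.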

What is a “Directive” in the context of the Anthropic ban?

In the GSA release, "directive" is not a standalone document type exhaustively defined in U.S. law. Rather, it is an umbrella term for a presidential instruction to the executive branch (U.S. federal agencies), which may be implemented as an Executive Order, a Presidential Memorandum, or another presidential instrument. Whether such a "directive" is legally effective beyond internal executive management depends, among other things, on whether it must be published as a document with "general applicability and legal effect": the Federal Register Act requires publication of certain presidential documents in the Federal Register (e.g., Executive Orders, proclamations, and other documents of general applicability and legal effect) under 44 U.S.C. § 1505, and publication generally serves as constructive notice under 44 U.S.C. § 1507.

Rationale: security concerns and a dispute over usage boundaries

The U.S. government publicly justifies the move with national security interests and the position that U.S. law — not private terms of service — should define how AI is used in defense contexts. Anthropic, by contrast, frames the dispute as a failed negotiation over two narrow exceptions the company did not want to permit even under “lawful use”: mass surveillance of U.S. citizens and fully autonomous weapons.

In its official statement, Anthropic says it maintained these exceptions for two reasons: first, today’s frontier models are not reliable enough for fully autonomous weapon systems; second, large-scale domestic surveillance violates civil liberties. The company says it will challenge any “supply chain risk” designation in court (Anthropic statement dated February 27, 2026).

Supply chain risk: which legal bases may be relevant to the Anthropic ban in U.S. federal agencies

Within the defense department, U.S. procurement law provides mechanisms to mitigate “supply chain risk” through concrete procurement actions. A central reference point is 10 U.S.C. § 3252, which allows “covered procurement actions” to reduce supply chain risk in the context of covered systems/procurements. In practice, that can translate into excluding sources, restricting subcontracting, or imposing contractual requirements.

The flashpoint — and where it becomes legally contentious — is the asserted scope. Anthropic argues that a supply chain risk designation under § 3252 could only reach use within “Department of War” contracts, not broadly prohibit “any commercial activity” between a contractor and Anthropic outside that contract context. That question is likely to be clarified in potential litigation and in internal implementation documents (contracting guidance, class deviations, ATO updates).

What objective infosec evidence Anthropic can point to

Alongside the political dispute, it is worth looking at auditable security and compliance artifacts. For its commercial products (including Claude for Work and the Anthropic API), Anthropic points to a publicly accessible certifications overview.

The related evidence documents (e.g., certificates/reports) are provided via the Trust Center. For U.S. government workloads delivered via third-party platforms, Anthropic also notes that Claude on Amazon Bedrock in AWS GovCloud (US) regions is authorized for FedRAMP High as well as DoD Impact Level 4 and 5 workloads (Anthropic announcement). AWS further documents in the GovCloud user guide that, in this context, models such as Claude Sonnet 4.5, Claude 3.7 Sonnet, Claude 3.5 Sonnet v1, and Claude 3 Haiku are listed as FedRAMP- and IL4/5-authorized (AWS documentation).
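For readers unfamiliar with the Bedrock route mentioned above: consuming Claude through Amazon Bedrock means sending the Anthropic Messages schema with a fixed `anthropic_version` field in the request body, rather than calling Anthropic's own API. The sketch below builds such a request body; the model ID and region in the comment are illustrative assumptions, and the actual invocation is commented out because it requires GovCloud credentials and an appropriately authorized account.

```python
import json

def build_bedrock_claude_request(prompt: str, max_tokens: int = 512) -> str:
    """Build the JSON body Bedrock expects for Anthropic models.

    Bedrock's Anthropic integration uses the Messages API schema with
    an `anthropic_version` field in the body instead of an HTTP header.
    """
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# Invoking the model would then look roughly like this (not executed
# here; the model ID is illustrative and access depends on the account):
#
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-gov-west-1")
#   response = client.invoke_model(
#       modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
#       body=build_bedrock_claude_request("Summarize this document."),
#   )
```

The practical point for the article's context: because the authorization boundary here is AWS GovCloud, not Anthropic's own infrastructure, this delivery path is one of the artifacts agencies would weigh when reassessing data flows.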
