Google, Amazon, and Microsoft say they will keep working with Anthropic outside of defense-related business after the Department of Defense designated the company a supply chain risk, while a DoD AI official explains the reasoning behind the designation

Following a dispute with the US Department of Defense over the military use of AI that ended with Anthropic being designated a 'supply chain risk,' several tech companies have announced that they will continue to partner with the company.
Microsoft says Anthropic's products remain available to customers after Pentagon blacklist
While Anthropic offered custom models of its AI 'Claude' to government agencies, it explicitly prohibited 'mass surveillance of citizens' and the 'development of fully autonomous weapons.' The Department of Defense asked Anthropic to relax these restrictions, but Anthropic refused, forcing the Department to choose between dropping its demands or terminating the relationship. Ultimately, Anthropic held firm, and the Department of Defense chose to terminate the relationship and, in retaliation, designated Anthropic a 'supply chain risk,' a designation that had never before been applied to a domestic company, restricting its use by government agencies.
AI company Anthropic officially designated as a 'supply chain risk to US national security,' Anthropic vows legal battle - GIGAZINE

The Department of Defense designated Anthropic as a supply chain risk, meaning that 'any contractor, supplier, or partner doing business with the military may not engage in any commercial activity with Anthropic.' However, Microsoft, Google, and Amazon have stated that they will continue to partner with Anthropic for other business purposes.
For example, Microsoft has integrated Claude into its tools as an add-on and provides it to various government agencies. Microsoft is not dropping Claude, and a spokesperson said, 'After review by our legal counsel, we have concluded that Anthropic's products, including Claude, will remain available to our non-DoD customers through Microsoft 365, GitHub, and other platforms, and we will continue to collaborate with Anthropic on non-defense-related projects.'
Google also said, 'We understand that the Department of Defense's decision does not prevent us from collaborating with Anthropic on non-defense-related projects, and their products will remain available through our platforms, including Google Cloud.' Amazon said, 'Amazon Web Services customers and partners can continue to use Claude for all non-DoD-related workloads.'
Noah Zweben, a project manager for Claude Code at Anthropic, posted, 'Tough times show you who your friends are. Thank you,' tagging Microsoft, Amazon, and Google.
Tough times show you who your friends are. Thank you @Microsoft @amazon and @Google https://t.co/LJVQFCFqA1
— Noah Zweben (@noahzweben) March 6, 2026
Meanwhile, Emil Michael, the Department of Defense's AI chief, has offered some important context on why the Department went so far as to designate Anthropic a supply chain risk, why it objected to Anthropic's safeguards, and why it demanded 'all lawful uses of AI.'
Inside the Culture Clash That Tore Apart the Pentagon's Anthropic Deal
https://www.piratewires.com/p/inside-pentagon-anthropic-deal-culture-clash
According to Michael, Anthropic's terms of use for its AI were so convoluted that he found himself having to ask Anthropic for permission every time he wanted to use it.
For example, Michael posed a scenario in which a swarm of drones is attacking, humans have no way to defend against it, and only an AI-powered defense system can respond. Anthropic apparently approved AI use in that situation.
The second scenario was, 'A hypersonic missile is inbound, we have to act within 90 seconds, and only AI can respond in time.' Anthropic's answer was, 'There may be exceptions; call us anytime.' Of course, if such a situation actually arose, there would be no time to check by phone.
Michael raised scenario after scenario, but Anthropic consistently maintained that any use outside the scope of its terms of service should be left to Anthropic's discretion. Since the Department of Defense could not negotiate every conceivable exception, it asked Anthropic to permit 'all lawful uses.'

Michael also criticized as 'arbitrary' Anthropic's decision to single out 'mass surveillance of the public' and 'the development of fully autonomous weapons,' and said this was a point of disagreement with the Department of Defense.
The Department of Defense says it has been portrayed as wanting mass surveillance when it does not. 'We're not the FBI or the Department of Homeland Security, and mass surveillance is not our job,' Michael said, adding, 'Isn't scraping data from the internet on a massive scale to build profiles of individuals Anthropic's job?'
Michael did not, however, deny the development of fully autonomous weapons. He pointed to the military's 'Golden Dome' concept, which envisions a system that uses AI to detect and shoot down objects traveling at five times the speed of sound from outer space. Fully autonomous AI is essential for such a system, and Anthropic agreed on that point.
Michael says that after repeated media reports painted the Pentagon as pursuing the horrific goals of mass surveillance of the public and the development of fully autonomous weapons, he became convinced this was an intelligence operation.

Michael also found it 'troubling' that whenever the two sides held a discussion, Anthropic repeatedly referred the matter back to its own 'political bureau' and ethics committee.
The most dramatic moment came just as negotiations appeared to be collapsing, when Anthropic, without any prior notice, unilaterally published a blog post stating, 'Negotiations with the Department of Defense have broken down. We cannot allow all lawful uses of AI.' According to Michael, this post, published while negotiations were still underway, infuriated President Donald Trump and Secretary of Defense Pete Hegseth. The following day, President Trump ordered all federal agencies to phase out Anthropic's technology over a six-month period, and on March 6, 2026, Anthropic was designated a supply chain risk.
Anthropic CEO Dario Amodei rejects Pentagon request over AI security issues - GIGAZINE

'There was a real fear that Anthropic's AI could be programmed to fit Anthropic's own constitution, psyche, and policy preferences,' Michael said. 'For example, what if Anthropic baked its own policy goals and ideology into the model so that it behaved like, "Don't fire the laser missiles at people you don't want to die"? That's a supply chain risk.'
One expert argues that the Department of Defense's pressure on AI companies is aimed at preserving the state's 'monopoly on the use of force': AI is expected to become a weapon on par with nuclear weapons, and is therefore essential to the existence and basic functions of nation-states.
If AI is a weapon, why don't we regulate it like one?
https://www.noahpinion.blog/p/if-ai-is-a-weapon-why-dont-we-regulate
in Note, Posted by log1p_kr