OpenAI reaches AI contract with Department of Defense, shortly after Anthropic negotiations collapse



Following the breakdown in negotiations between the Department of Defense and Anthropic, the developer of Claude, President Donald Trump ordered federal agencies to designate Anthropic as a supply chain risk and halt use of its technology. Shortly after, OpenAI announced it had reached an agreement to deploy advanced AI systems on classified Department of Defense networks.

Our agreement with the Department of War | OpenAI
https://openai.com/index/our-agreement-with-the-department-of-war/

OpenAI reveals more details about its agreement with the Pentagon | TechCrunch
https://techcrunch.com/2026/03/01/openai-shares-more-details-about-its-agreement-with-the-pentagon/

The Department of Defense, which has a contract with Anthropic for the military use of AI, is reportedly pressuring the company to remove the safeguards it has put in place. In response, Anthropic CEO Dario Amodei issued a statement saying, 'We will not succumb to intimidation.'

Anthropic CEO Dario Amodei rejects Pentagon request over AI security issues - GIGAZINE



Secretary of Defense Pete Hegseth met with Amodei and warned him, 'If you don't lift the restrictions on Claude at the request of the military, we will either force you to remove the restrictions under the law or terminate the contract and sever our relationship.'

Defense Secretary Hegseth warns Anthropic to either lift Claude's restrictions or cut ties - GIGAZINE




OpenAI's partnership with the Department of Defense comes on the heels of failed government negotiations with Anthropic, with President Donald Trump ordering federal agencies to suspend use of Anthropic's technology and designating the company a supply chain risk.

President Trump claims that Anthropic's left-wing fanatics have attempted to control the U.S. military, and orders the severance of ties. Secretary of Defense Hegseth will designate Anthropic as a 'supply chain risk' - GIGAZINE



Meanwhile, OpenAI stressed that its agreement maintains substantially the same safety standards and shares the same safety red lines as Anthropic, arguing that the contract offers stronger protections than any previous agreement because it fully preserves the company's own safety stack.

OpenAI has set three key 'boundaries' as top priorities in this agreement: no use of its AI technology for public surveillance in the United States; no direct control of autonomous weapons systems involving the use of lethal force; and no use of its AI technology for advanced automated decision-making that significantly affects individual rights, such as social credit systems.

To enforce these boundaries technically, OpenAI employs a deployment architecture that provides its models exclusively through cloud environments. Direct installation of AI models on edge devices such as aircraft or drones is prohibited, preventing integration into physical weapon systems. This also allows independent verification that the safety stack is functioning properly and lets classifiers be updated continually as needed.

In terms of operations, OpenAI has established a system in which security-cleared technical experts and safety and alignment researchers are directly involved in the process. To prevent the AI from autonomously making lethal decisions, humans always remain in the decision-making loop. OpenAI argues that such multi-layered technical and human safeguards, rather than written terms of use alone, are the best way to prevent inappropriate use.

The agreement requires compliance with U.S. laws such as Executive Order 12333 and the Foreign Intelligence Surveillance Act (FISA), which strictly define what constitutes private information and explicitly prohibit unlawful domestic surveillance. However, some critics have raised concerns that these laws could be interpreted in ways that permit domestic surveillance.

While OpenAI previously held unclassified defense contracts worth $200 million, this latest agreement, backed by $110 billion in funding, the largest private investment in history, signals the company's commitment to a deeper role in national security. To ease tensions across the industry, OpenAI is offering similar contract terms to other companies, including Anthropic, and is urging the government to resolve its disputes with them.

OpenAI CEO Sam Altman said on X (formerly Twitter), 'Designating Anthropic as a supply chain risk would be extremely harmful to the AI industry, the nation, and of course to Anthropic itself. We have told the Department of Defense that we acted quickly in part because we hoped to avoid escalation. While OpenAI is certainly competitive with Anthropic, building a safe superintelligence and sharing its benefits widely is far more important than any corporate competition. We believe that if we were able to, Anthropic would act to help us in the face of serious misconduct. We all should be deeply concerned by this precedent.'



Regarding the contract with the Department of Defense, Altman said, 'The contract was obviously rushed and didn't look good. We really wanted to calm things down, and we thought the deal that was offered was a good one. If we're right and this leads to a relaxation of tensions between the Department of Defense and the AI industry, we'll be seen as geniuses and as a company that took on a lot of pain to help the industry. If not, we'll be branded as rushed and careless.'



in AI, Posted by log1i_yk