OpenAI has reached an agreement with the Defense Department to embed its artificial intelligence models in the department’s systems, CEO Sam Altman confirmed. He said the contract contains two main safety rules: no mass surveillance of Americans, and human accountability for any military use of force, including autonomous weapons. OpenAI has committed to these safeguards in its cooperation with the Department of War. Anthropic, by contrast, regards those safeguards as weak and dangerous.
The controversial deal came after President Trump ordered the federal government to stop using Anthropic’s services over the company’s refusal to ease restrictions on AI used for surveillance or fully autonomous weaponry. Although OpenAI’s technology carries similar limitations, the deal moved forward, accompanied by calls for the same standards to apply across all AI companies. Both OpenAI and xAI accepted the legal and safety terms that Anthropic had declined.
Anthropic has maintained its opposition to relaxed controls and said it would challenge any “supply-chain risk” designation, stressing that it more firmly refuses to allow surveillance or autonomous weapons to run through its systems. Altman also said OpenAI would install technical protections, work closely with Defense Department engineers, and operate only in restricted cloud environments. While OpenAI’s tools are not yet available on the government’s preferred Amazon cloud, the company is partnering with Amazon Web Services for commercial deployments.
This ainewsarticles.com article is a brief synopsis; the original report is available at the source.

