OpenAI’s Pentagon deal faces scrutiny as Sam Altman calls it “opportunistic and sloppy”
OpenAI is revising its fast-tracked agreement to provide AI capabilities to the U.S. Department of Defense after CEO Sam Altman admitted the arrangement looked opportunistic and sloppy. The contract raised concerns that OpenAI’s technology could enable domestic mass surveillance; Altman said the company would explicitly bar such uses and block deployment by defense intelligence agencies such as the NSA.
OpenAI, whose ChatGPT has hundreds of millions of users, moved quickly to secure the deal after Anthropic, the Pentagon’s previous AI contractor, exited. Anthropic had argued that using its systems for mass domestic surveillance would be incompatible with democratic values, a stance that drew sharp political backlash, including comments from then-President Donald Trump, who dismissed Anthropic and urged the federal government to stop using its technology.
Despite OpenAI’s denials that the agreement permitted surveillance, commentators drew parallels to the Snowden revelations of 2013, when it emerged that the NSA was engaged in broad collection of phone and internet data. The deal ignited online protests, with users on platforms like X and Reddit urging a “delete ChatGPT” movement. One widely noted Reddit post challenged the sincerity of the backlash, asking for proof of cancellations now that a war machine was being trained.
Anthropic’s Claude briefly topped Apple’s App Store charts, according to Sensor Tower, as attention shifted away from ChatGPT.
In an internal message shared on X, Altman acknowledged that the initial Friday announcement was too hasty after Anthropic’s exit. He wrote, “We shouldn’t have rushed to get this out on Friday. The issues are super complex and demand clear communication. We were trying to de-escalate things and avoid a much worse outcome, but it ended up looking opportunistic and sloppy.”
When the deal was announced, OpenAI described the agreement as having “more guardrails than any previous arrangement for classified AI deployments, including Anthropic’s.” Nevertheless, concerns about military use of AI persisted, prompting nearly 900 OpenAI and Google employees to sign an open letter urging leaders to refuse the DoD access to their products for surveillance or for autonomous weaponry without human oversight.
The letter warned that the government was attempting to sow fear among companies and urged a united stance to oppose the DoD’s demands. It was signed by 796 Google employees and 98 OpenAI staff. OpenAI has stated that a red line in its own policy is to prevent using its technology to direct autonomous weapons systems.
Critics have questioned how OpenAI could secure a deal that resolves ethical concerns Anthropic called insurmountable. Miles Brundage, a former OpenAI policy lead, suggested that some in the organization may have either compromised or misrepresented the outcome, while others argued the company deserves credit for pursuing a fair resolution within democratic processes. Brundage stressed that government decisions should be made through democratic means and that the tech community needs a voice at the table.
Separately, three other U.S. cabinet agencies—the State Department, the Treasury, and Health and Human Services—moved to end use of Anthropic’s AI products after the Department of Defense designated Anthropic a supply-chain risk. President Trump has directed a government-wide shift away from Anthropic’s offerings in response to defense concerns.
Would you agree with OpenAI’s reassessment and tighter guardrails, or do you think the collaboration with the DoD is a necessary step for responsible AI development? Share your thoughts in the comments.