According to DefTechBeat, Google is in negotiations with the US Department of Defense to let the Pentagon deploy its Gemini AI model in a classified environment. Two sources familiar with the talks told The Information that the two parties are close to reaching an agreement. This marks a significant reversal in Google's stance toward the military since it withdrew in 2018, amid employee protests, from Project Maven, a program in which the military used AI for drone target identification.
The agreement would permit Google's AI to be used for "all lawful purposes," but Google has proposed additional terms prohibiting its use for domestic mass surveillance or autonomous weapons, including target identification without "appropriate" human oversight and control. The wording closely mirrors the agreement OpenAI signed with the Pentagon earlier this year; OpenAI CEO Sam Altman had asked the Pentagon to apply the same terms to all AI companies.
When the OpenAI contract took effect, lawyers pointed out a structural flaw: the wording that seemingly banned fully autonomous lethal weapons and domestic mass surveillance could be overridden by the contract's "all lawful purposes" clause. The terms Google has proposed face the same issue.
These two safety commitments were at the core of Anthropic's public falling-out with the Pentagon in February this year. Anthropic CEO Dario Amodei refused to drop the bans on fully autonomous lethal weapons and domestic mass surveillance, leading the Pentagon to designate Anthropic a "supply chain risk" and exclude it from new military contracts after a six-month transition period. Anthropic has filed two lawsuits over the designation, which remains in effect. Several Pentagon officials are openly hostile toward Amodei, while Google's standing within the Pentagon is improving.
The classified-deployment negotiations are not an isolated development. Since the start of Trump's second term in early 2025, Google has revised its AI principles, removing explicit prohibitions on weapons and surveillance. Military contracts have followed: in July 2025, Google secured a $200 million AI pilot contract with the Pentagon's Chief Digital and AI Office, and in December, Gemini became the first model integrated into GenAI.mil, the military's unclassified AI platform. In March of this year, Google announced AI agent products to automate unclassified work for the Pentagon. Classified deployment is the final piece of this roadmap.
Internal tensions have never ceased. Google's Chief AI Scientist Jeff Dean had previously spoken out on X against using AI for mass surveillance or autonomous weapons, and more than 200 Google employees co-signed a letter to him. Dean, along with nearly 40 Google and OpenAI employees, also signed an amicus brief supporting Anthropic in its lawsuit against the Pentagon.
