Alphabet’s Google is moving back into the U.S. defense AI spotlight, as the Pentagon reassesses its reliance on existing vendors and explores new partners.
After tensions emerged around tools like Claude, the military is widening its options.
A new report suggests Google could soon play a larger role in that shift.
The tech giant is in talks to bring its Gemini models into classified defense environments, signaling a renewed alignment between Silicon Valley and national security priorities.
Gemini enters defense pipeline
The development was first reported by The Information, which cited two sources familiar with the discussions.
The talks involve a potential agreement between Google and the U.S. Department of Defense to deploy Gemini for classified and other lawful uses.
The move comes as the Pentagon intensifies efforts to integrate advanced AI into operational workflows.
Officials increasingly see such systems as critical to decision-making speed and battlefield awareness.
A Pentagon official told Newsweek: “The Pentagon will continue to rapidly deploy frontier AI capabilities to the warfighter through strong industry partnerships across all classification levels.”
Rather than committing to one provider, the Defense Department has deliberately spread its bets. It continues to test multiple AI platforms while building internal frameworks to manage them.
Expanding beyond a single vendor
The Pentagon’s earlier use of Anthropic’s Claude reflects this cautious approach.
The system has supported analytical and decision-support tasks under strict controls.
Officials have consistently emphasized that AI augments human judgment rather than replacing it.
Still, the search for alternatives suggests limits to any single system’s role.
Defense planners want flexibility as AI capabilities evolve rapidly.
Reliability concerns also shape these decisions. A recent analysis cited by Futurism found that Gemini-powered AI search produced incorrect responses about 9 percent of the time.
At the Pentagon’s scale, even small error rates can introduce operational risk.
Beyond accuracy, officials must weigh how AI systems behave under pressure.
High-stakes environments demand predictable outputs, auditability, and clear human oversight.
These factors continue to influence how quickly and deeply AI tools enter mission-critical workflows.
Google appears to be addressing those concerns directly in its negotiations. The company has reportedly proposed contractual safeguards to restrict how Gemini can be used in defense contexts.
The reported provisions aim to block applications such as domestic mass surveillance and autonomous weapons without meaningful human control.
That language reflects broader industry pressure to define limits on military AI use.
Google’s re-engagement with defense work follows years of internal debate and public scrutiny.
Earlier projects drew employee protests and forced the company to rethink its approach to military partnerships.
Now, the company se