DIRECT | April 17, 2026
Google in talks with Pentagon to deploy Gemini AI after Claude limits dispute

Alphabet’s Google is moving back into the U.S. defense AI spotlight as the Pentagon reassesses its reliance on existing vendors and explores new partners.
After tensions emerged around tools like Anthropic’s Claude, the military is widening its options.
A new report suggests Google could soon play a larger role in that shift.
The tech giant is in talks to bring its Gemini models into classified defense environments, signaling a renewed alignment between Silicon Valley and national security priorities.
Gemini enters defense pipeline
The development was first reported by The Information, which cited two sources familiar with the discussions.
The talks involve a potential agreement between Google and the U.S. Department of Defense to deploy Gemini for classified and other lawful uses.
The move comes as the Pentagon intensifies efforts to integrate advanced AI into operational workflows.
Officials increasingly see such systems as critical to decision-making speed and battlefield awareness.
A Pentagon official told Newsweek: “The Pentagon will continue to rapidly deploy frontier AI capabilities to the warfighter through strong industry partnerships across all classification levels.”
Rather than committing to one provider, the Defense Department has deliberately spread its bets. It continues to test multiple AI platforms while building internal frameworks to manage them.
Expanding beyond single vendor
The Pentagon’s earlier use of Anthropic’s Claude reflects this cautious approach.
That system has supported analytical and decision-support tasks under strict controls.
Officials have consistently emphasized that AI augments human judgment rather than replacing it.
Still, the search for alternatives suggests limits to any single system’s role.
Defense planners want flexibility as AI capabilities evolve rapidly.
Reliability concerns also shape these decisions. A recent analysis cited by Futurism found that Gemini-powered AI search produced incorrect responses about 9 percent of the time.
At the Pentagon’s scale, even small error rates can introduce operational risk.
Beyond accuracy, officials must weigh how AI systems behave under pressure.
High-stakes environments demand predictable outputs, auditability, and clear human oversight.
These factors continue to influence how quickly and deeply AI tools enter mission-critical workflows.
Google appears to be addressing those concerns directly in its negotiations. The company has reportedly proposed contractual safeguards to restrict how Gemini can be used in defense contexts.
The reported provisions aim to block applications such as domestic mass surveillance, as well as autonomous weapons that lack meaningful human control.
That language reflects broader industry pressure to define limits on military AI use.
Google’s re-engagement with defense work follows years of internal debate and public scrutiny.
Earlier projects drew employee protests and forced the company to rethink its approach to military partnerships.
Now, the company se