
Anthropic doubles down on defense with Claude Gov model

Island News Desk
September 18, 2025
Industry News

Anthropic launches "Claude Gov" AI models tailored for U.S. national security, enhancing strategic planning and intelligence analysis.

Source: Outlever.com

Anthropic is deepening its involvement in the U.S. national security sector, launching specialized "Claude Gov" AI models for government operations.

  • Tailored for the mission: The new "Claude Gov" suite assists with strategic planning and intelligence analysis and was built on direct feedback from government users. Anthropic says the models are designed to handle classified material more effectively, refusing less often when working with such data, and to better comprehend intelligence and defense-related documents.

  • Boots on the ground (already): These specialized models are reportedly already deployed by U.S. national security agencies at the highest levels, operating within classified environments. Anthropic assures these tools underwent the same rigorous safety testing as its other Claude models.

  • Governance gets a hawk: Reinforcing its defense focus, Anthropic appointed Richard Fontaine, CEO of the Center for a New American Security, to its Long-Term Benefit Trust. This body, which Anthropic says prioritizes safety over profit, has the power to appoint board directors; Fontaine, a former foreign policy adviser to the late Sen. John McCain, holds no financial stake in the company. CEO Dario Amodei noted that Fontaine's expertise arrives as "advanced AI capabilities increasingly intersect with national security considerations."

Anthropic's moves mirror a broader trend: OpenAI launched ChatGPT Gov in January, and other tech giants are also pursuing defense contracts. Anthropic had previously partnered with Palantir and AWS in late 2024 to offer its AI for defense applications, signaling a sustained push into this lucrative, complex arena.

  • Elsewhere in AI: Anthropic is not just dipping a toe but taking a significant plunge into the national security pool, betting that specialized AI and expert oversight can navigate the tricky waters of defense contracting while upholding its safety-first rhetoric. Meanwhile, the healthcare sector is also witnessing a surge in AI adoption, with LLMs transforming diagnostics and data mining, even as regulatory oversight of these tools becomes a top legal concern. In parallel, the U.S. government, through HHS, has released a strategic plan for trustworthy AI in health, while the finance industry develops specialized LLMs with a strong emphasis on compliance and risk management.
