Anthropic doubles down on defense with Claude Gov model

New Tab News Team
September 18, 2025
Industry News

Anthropic launches "Claude Gov" AI models tailored for U.S. national security, enhancing strategic planning and intelligence analysis.

Source: Outlever.com

Anthropic is deepening its involvement in the U.S. national security sector, launching specialized "Claude Gov" AI models for government operations.

  • Tailored for the mission: The new "Claude Gov" AI suite assists with strategic planning and intelligence analysis, and was built on direct feedback from government users. Anthropic says the models are designed for improved handling of classified material, refusing less often when working with such data, and for deeper comprehension of intelligence and defense documents.

  • Boots on the ground (already): These specialized models are reportedly already deployed by U.S. national security agencies at the highest levels, operating within classified environments. Anthropic says the tools underwent the same rigorous safety testing as its other Claude models.

  • Governance gets a hawk: Reinforcing its defense focus, Anthropic appointed Richard Fontaine, CEO of the Center for a New American Security, to its Long-Term Benefit Trust. This body, which Anthropic says champions safety above profit, can appoint board directors; Fontaine, a former foreign policy adviser to the late Sen. John McCain, holds no financial stake in the company. CEO Dario Amodei noted that Fontaine's expertise arrives as "advanced AI capabilities increasingly intersect with national security considerations."

Anthropic's moves mirror a broader industry trend: OpenAI launched ChatGPT Gov in January, and other tech giants are also pursuing defense contracts. Anthropic itself partnered with Palantir and AWS in late 2024 to offer its AI for defense applications, signaling a sustained push into this lucrative, complex arena.

  • Elsewhere in AI: Anthropic is making a significant commitment to the national security sector, betting that specialized AI and expert oversight can navigate the complexities of defense contracting while upholding its safety-first rhetoric. Meanwhile, the healthcare sector is also seeing a surge in AI adoption, with LLMs transforming diagnostics and data mining even as regulatory oversight of these tools becomes a top legal concern. In parallel, the U.S. government, through HHS, has released a strategic plan for trustworthy AI in health, while the finance industry develops specialized LLMs with a strong emphasis on compliance and risk management.

Related content

An Insider's Guide to Rewiring Orgs as Agents Move From Tools to Core Operators

Omer Grossman, former Chief Trust Officer and Head of the CYBR Unit at CyberArk, explains why nearly every enterprise claims to use AI but almost none have transformed the way their organizations actually operate.

Shadow AI and Departmental Silos Force Enterprises to Rethink Resilience

Nethusha Ravisuthan, Sales Support and Operations Manager at Microsoft, argues that Shadow AI, departmental silos, and ungoverned AI agents are compounding enterprise risk, and that operational trust and holistic system resilience must become foundational to AI deployment.

How Higher Education Puts Boundaries Around AI Agents With Sanctioned Access Models

Vijay Samtani, CISO at Cambridge University, discusses how blocking AI agents is a losing battle for security leaders. Their best course of action is to build clear rules and guidelines for AI access to control vulnerable surfaces.
