Anthropic fortifies its trust board with new appointee, eyes defense sector for growth

New Tab News Team
September 18, 2025
Industry News

Anthropic appoints national security expert Richard Fontaine to its Long-Term Benefit Trust, emphasizing AI's security implications.

Source: Outlever.com

Anthropic has appointed national security veteran Richard Fontaine to its Long-Term Benefit Trust, signaling a deeper focus on AI's security implications as it also rolls out new AI models designed for government defense work. The twin moves underscore the AI firm's strategy to navigate the complex intersection of advanced AI development and national security demands.

  • Guarding the guardians: Anthropic's Long-Term Benefit Trust, a distinct governance body, aims to steer the company's work toward safety over pure profit, and it has the power to appoint some board members. Fontaine, CEO of the Center for a New American Security (CNAS), joins other trustees like Zachary Robinson of the Centre for Effective Altruism and Neil Buddy Shah from the Clinton Health Access Initiative, according to TechCrunch. Anthropic CEO Dario Amodei stated Fontaine's "expertise comes at a critical time as advanced AI capabilities increasingly intersect with national security considerations."

  • Claude goes classified: Fontaine's appointment coincided closely with Anthropic's introduction of "Claude Gov," a suite of AI models built for U.S. defense and intelligence agencies operating in classified settings. These systems are designed for tasks such as strategic planning and improved handling of sensitive data, with Anthropic noting the models "refuse less" when engaging with classified information. This isn't Anthropic's first venture into defense; last November, it partnered with Palantir and AWS to offer its AI for defense applications.

Anthropic’s moves mirror a broader industry pattern, with other major AI developers also actively engaging the defense and intelligence sectors. OpenAI is working to build stronger connections with the U.S. Defense Department, Meta is making its Llama models available for similar government uses, and Google is developing a version of its Gemini AI for classified environments, as reported by Ars Technica.

  • More from the Anthropic files: Beyond board appointments, CEO Dario Amodei has been outspoken about AI's broader impact, warning that AI could displace up to half of entry-level white-collar jobs within five years and push unemployment as high as 20%. He has also claimed Anthropic's AI hallucinates less than humans, though some observers believe he may be understating AI's disruptive effect on jobs.
