Anthropic fortifies its trust board with new appointee, eyes defense sector for growth

New Tab News Team
September 18, 2025
Industry News

Anthropic appoints national security expert Richard Fontaine to its Long-Term Benefit Trust, emphasizing AI's security implications.

Source: Outlever.com

Anthropic has appointed national security veteran Richard Fontaine to its Long-Term Benefit Trust, signaling a deeper focus on AI's security implications as it also rolls out new AI models designed for government defense work. The twin moves underscore the AI firm's strategy to navigate the complex intersection of advanced AI development and national security demands.

  • Guarding the guardians: Anthropic's Long-Term Benefit Trust, a distinct governance body, aims to steer the company's work toward safety over pure profit, and it has the power to appoint some board members. Fontaine, CEO of the Center for a New American Security (CNAS), joins fellow trustees such as Zachary Robinson of the Centre for Effective Altruism and Neil Buddy Shah of the Clinton Health Access Initiative, according to TechCrunch. Anthropic CEO Dario Amodei stated that Fontaine's "expertise comes at a critical time as advanced AI capabilities increasingly intersect with national security considerations."

  • Claude goes classified: Fontaine’s appointment coincided closely with Anthropic’s introduction of "Claude Gov," a suite of AI models built for U.S. defense and intelligence agencies operating in classified settings. These systems are designed for tasks such as strategic planning and improved handling of sensitive data, with Anthropic noting the models "refuse less" with such information. This isn't Anthropic's first venture into defense; last November, it partnered with Palantir and AWS to offer its AI for defense applications.

Anthropic’s moves mirror a broader industry pattern, with other major AI developers also actively engaging the defense and intelligence sectors. OpenAI is working to build stronger connections with the U.S. Defense Department, Meta is making its Llama models available for similar government uses, and Google is developing a version of its Gemini AI for classified environments, as reported by Ars Technica.

  • More from the Anthropic files: Beyond board appointments, CEO Dario Amodei has been vocal about AI's broader impact, warning that AI could displace up to half of entry-level white-collar jobs within five years and push unemployment as high as 20%; some observers believe he may even be understating AI's disruptive effect on jobs. He has also claimed that Anthropic's AI hallucinates less than humans do.

