Anthropic fortifies its trust board with new appointee, eyes defense sector for growth

Island News Desk
September 18, 2025
Industry News

Anthropic appoints national security expert Richard Fontaine to its Long-Term Benefit Trust, emphasizing AI's security implications.

Source: Outlever.com

Anthropic has appointed national security veteran Richard Fontaine to its Long-Term Benefit Trust, signaling a deeper focus on AI's security implications as it also rolls out new AI models designed for government defense work. The twin moves underscore the AI firm's strategy to navigate the complex intersection of advanced AI development and national security demands.

  • Guarding the guardians: Anthropic's Long-Term Benefit Trust, a distinct governance body, aims to steer the company's work toward safety over pure profit, and it has the power to appoint some board members. Fontaine, CEO of the Center for a New American Security (CNAS), joins other trustees like Zachary Robinson of the Centre for Effective Altruism and Neil Buddy Shah from the Clinton Health Access Initiative, according to TechCrunch. Anthropic CEO Dario Amodei stated Fontaine's "expertise comes at a critical time as advanced AI capabilities increasingly intersect with national security considerations."

  • Claude goes classified: Fontaine’s appointment coincided closely with Anthropic’s introduction of "Claude Gov," a suite of AI models built for U.S. defense and intelligence agencies operating in classified settings. These systems are designed for tasks such as strategic planning and improved handling of sensitive data, with Anthropic noting the models "refuse less" with such information. This isn't Anthropic's first venture into defense; last November, it partnered with Palantir and AWS to offer its AI for defense applications.

Anthropic’s moves mirror a broader industry pattern, with other major AI developers also actively engaging the defense and intelligence sectors. OpenAI is working to build stronger connections with the U.S. Defense Department, Meta is making its Llama models available for similar government uses, and Google is developing a version of its Gemini AI for classified environments, as reported by Ars Technica.

  • More from the Anthropic files: Beyond board appointments, CEO Dario Amodei has been vocal, warning that AI could displace up to half of entry-level white-collar jobs within five years and potentially push unemployment to 20%. He has also claimed that Anthropic's AI hallucinates less than humans do, though some observers believe he may be understating AI's disruptive impact on jobs.

Related content

Agentic AI Browsers Shift the Security Focus to Cultural Vulnerabilities

Joseph Sack, CEO of Smart Tech Solution LLC, explains why the primary security risk for agentic AI browsers is human behavior and how defensive AI tools can help.

Agentic AI Browsers Are Rewriting the Rules of Information Discovery and Trust

Firas Jarboui, Head of Machine Learning at Gorgias, explains how to secure Agentic AI browsers by gating actions and segregating context from workflows.

AI Browsers Need Real Oversight to Earn Enterprise Trust

Mikhail Vasilyev, a Principal Software Development Engineer at Workday, explains why AI browsers need strict visibility, containment, and auditability before enterprise use.
