AI security’s biggest threat isn’t code, it’s the growing failure to communicate across disciplines

Island News Desk
September 18, 2025
Enterprise Security

Communication barriers among stakeholders pose a greater risk to AI security than technical flaws, according to Security Policy Researcher and Advisor Tiffany Saade.

The most dangerous vulnerability in AI isn't in the code; it's in the conversation. Policymakers, lawyers, and engineers agree on the threat yet remain divided by jargon and discipline. The problem isn't talent; it's translation.

Tiffany Saade is an AI Security Policy Researcher at Stanford University and a Consultant on AI Governance and Cybersecurity to Lebanon's Ministry of Information Technology and AI. She sees a critical failure unfolding, not in the technology itself, but in how we talk about it.

  • Lost in translation: "The biggest problem is that we aren't speaking the same language," Saade says. "You have diverse stakeholders who all agree there's a threat, but they define key terms like ‘safety’ and ‘interpretability’ in completely different ways because they're looking at them from different lenses. We end up with policy that looks amazing on paper but is either too difficult or unclear to apply on a technical level."

  • The double bind: The communication gap is widening just as AI agents emerge as powerful, dual-use tools. On one side, they serve as revolutionary co-pilots for security teams. On the other, Saade warns, they introduce a twin threat: agents becoming honeypots for bad actors, and agents weaponized to scale multi-stage cyber operations.

Compounding this is the 'pace problem,' an unavoidable reality that demands sustained hyper-vigilance. "You have innovation that is moving so fast," she explains. "The question is, how do we create mitigation strategies that remain relevant?"

  • The weakest link: AI innovation's breakneck, uneven progress is creating what Saade calls 'asymmetric access' to security, a vulnerability she’s witnessed firsthand. “Not everyone is prepared to counter threats from AI agents. Entire countries and institutions lack the maturity to do so, and they become weak links in the global AI innovation flow,” she says. “I come from Lebanon, a country where we barely have access and are just starting to build capacity. When you come from a place like this, you truly see the importance of leveling the playing field.”

  • Playing offense: Rather than trying to slow innovation, Saade advocates for building adaptable security through a proactive, aggressive posture. That means organizations must constantly red team their own systems to uncover vulnerabilities before adversaries do. "We have to be our own most sophisticated adversary," says Saade. "The goal is to find and fix vulnerabilities on our own terms, not on an attacker's." She argues that 'secure by design' is the only approach that can build resilience against a torrent of ever-changing threats.

Saade warns of a "complacency trap," a subtle human risk that emerges as AI agents become more capable. The danger, she explains, is that organizations will over-rely on these agents, "giving our cognition and our security vigilance away to tools that are themselves vulnerable" and thereby creating a dangerous blind spot.

"No matter how secure your agents are, no matter if you have the perfect cybersecurity controls, you will get attacked," she states. "So the real question is, how hard are you going to get hit?" In the age of intelligent systems, breaches are par for the course, and it comes down to how well an organization absorbs, adapts, and responds.
