AI security’s biggest threat isn’t code, it’s the growing failure to communicate across disciplines

New Tab News Team
September 18, 2025
Enterprise Security

Communication barriers among stakeholders pose a greater risk to AI security than technical flaws, according to Security Policy Researcher and Advisor Tiffany Saade.

Credit: Outlever.com

The most dangerous vulnerability in AI isn't in the code, it's in the conversation. Policymakers, lawyers, and engineers agree on the threat yet remain divided by jargon and discipline. The problem isn't talent, it's translation.

Tiffany Saade is an AI Security Policy Researcher at Stanford University and Consultant on AI Governance and Cybersecurity to the Ministry of Information Technology and AI of Lebanon. She sees a critical failure unfolding, not in technology itself, but in how we talk about it.

  • Lost in translation: "The biggest problem is that we aren't speaking the same language," Saade says. "You have diverse stakeholders who all agree there's a threat, but they define key terms like ‘safety’ and ‘interpretability’ in completely different ways because they're looking at them from different lenses. We end up with policy that looks amazing on paper but is either too difficult or unclear to apply on a technical level."

  • The double bind: The communication gap is widening just as AI agents emerge as powerful, dual-use tools. On one side, they serve as revolutionary co-pilots for security teams. On the other, Saade warns, they introduce a twin threat: agents becoming honeypots for bad actors, and agents weaponized to scale multi-stage cyber operations.

Compounding this is the 'pace problem,' an unavoidable reality that demands constant hyper-vigilance. "You have innovation that is moving so fast," she explains. "The question is, how do we create mitigation strategies that remain relevant?"

  • The weakest link: AI innovation's breakneck, uneven progress is creating what Saade calls 'asymmetric access' to security, a vulnerability she’s witnessed firsthand. “Not everyone is prepared to counter threats from AI agents. Entire countries and institutions lack the maturity to do so, and they become weak links in the global AI innovation flow,” she says. “I come from Lebanon, a country where we barely have access and are just starting to build capacity. When you come from a place like this, you truly see the importance of leveling the playing field.”

  • Playing offense: Rather than trying to slow innovation, Saade advocates for building adaptable security through a proactive, aggressive posture. That means organizations must constantly red team their own systems to uncover vulnerabilities before adversaries do. "We have to be our own most sophisticated adversary," says Saade. "The goal is to find and fix vulnerabilities on our own terms, not on an attacker's." She argues that 'secure by design' is the only approach that builds resilience against a torrent of ever-changing threats.

Saade warns of a "complacency trap," a subtle human risk that emerges as AI agents become more capable. The danger, she explains, is that organizations will over-rely on these agents, "giving our cognition and our security vigilance away to tools that are themselves vulnerable" and thereby creating a dangerous blind spot.

"No matter how secure your agents are, no matter if you have the perfect cybersecurity controls, you will get attacked," she states. "So the real question is, how hard are you going to get hit?" In the age of intelligent systems, breaches are par for the course; what matters is how well an organization absorbs, adapts, and responds.

Related content

Cyber Resilience Replaces Breach Prevention As The Defining Measure For Enterprise Security

Theresa Lanowitz, cybersecurity evangelist and former Gartner analyst, explains why resilience and supply chain accountability are the priorities security leaders must act on in 2026.

Cyber Risk Accountability Moves Beyond Technical Teams To Executive Leadership

Muhammad Arshi Wasique, GM of MEA Operations at ThreatCure, reframes cyber risk as a financial tradeoff, pushing accountability from CISOs to CFOs and boards.

In Local Government, Cybersecurity Success Comes From Doing More With Less

Shane McDaniel, CIO for the City of Seguin, shows how municipal cybersecurity moves forward through resourcefulness, trust, and community when budgets and priorities collide.

