
Enterprise AI Becomes Critical Infrastructure as Gap Between Security and Governance Grows

Island News Desk
October 7, 2025
Enterprise Security

Aaron Mathews, Global Head of Cybersecurity at Orion Innovation, explains why AI is becoming essential to business operations even though security and governance frameworks haven't kept pace.

AI is already a core part of business operations, but the frameworks to secure it lag dangerously behind. Leaders continue to integrate AI into critical workflows despite lacking the governance and security infrastructure to manage it effectively, and some experts warn the widening gap could introduce significant risk.

For an insider's take, we spoke to Aaron Mathews, Global Head of Cybersecurity at digital transformation company Orion Innovation. A cybersecurity executive with more than 20 years of experience building enterprise security programs, Mathews has spent his career navigating complex environments: leading global cyber audit teams at Scotiabank, co-founding the NFT marketplace Token Bazaar, and managing security for essential government infrastructure like Canada's largest airport (GTAA) and Ontario's largest power producer. He has seen firsthand what it takes to get security right.

From his perspective, the AI gap stems from a fundamental misunderstanding about its role in the enterprise. Technical controls alone are not enough, he says. Instead, AI security relies equally on a formal governance model established from day one.

  • First things first: A risk assessment is a formal process that forces the business to manage AI with proper rigor, Mathews explains. "From a governance standpoint, there is a clear first step: conduct an AI risk assessment before any program is deployed. A step like this is not optional. The process is what forces the organization to establish the right governance mental model from day one, before AI becomes deeply embedded in the infrastructure."

  • Money talks: For an assessment to be meaningful, however, its findings must be translated into the language of the boardroom, Mathews continues. "The language of AI security is often too technical to resonate in the boardroom, a huge barrier to getting buy-in. We need to stop discussing abstract threats and start framing risk in terms of concrete business impacts. When you can explain that a vulnerability could lead to significant financial loss or major regulatory fines, that is when executives will start to listen and assign the resources needed."

Beyond prevention, preparation means planning for a new type of failure, Mathews says, because for most organizations a compromised AI model represents a category of risk that existing response plans simply cannot handle.

  • When models go bad: The goal is to have a documented process before a crisis hits, Mathews clarifies (a sketch of one way to capture such a playbook follows this list). "We have incident response playbooks for a server breach, so why don't we have them for a compromised AI model? We absolutely need them. These playbooks must answer critical questions before a crisis happens: How can our models be compromised? How could an attacker shut them down? And what are the exact technical processes our teams will use to respond, recover, and restore trust in the system?"

  • Follow the standards: Emerging standards offer a credible, third-party roadmap for moving from theory to practice, he continues. "The good news is that we aren't operating in a vacuum. Credible compliance frameworks have already emerged, like ISO 42001, the NIST AI Risk Management Framework, and the EU AI Act. These are critically important standards that provide a clear roadmap for success. The task for leaders now is to pick them up and start implementing them."
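
To picture what the playbook Mathews describes might look like in practice, here is a minimal sketch that captures his questions as structured, reviewable data that can be versioned like code. The class name and the sample entries are illustrative assumptions, not an actual Orion Innovation playbook:

```python
from dataclasses import dataclass, field

@dataclass
class ModelIncidentPlaybook:
    """Capture the playbook's answers as reviewable, versioned data."""
    model: str
    compromise_vectors: list = field(default_factory=list)   # how the model can be attacked
    shutdown_scenarios: list = field(default_factory=list)   # how an attacker could take it down
    response_steps: list = field(default_factory=list)       # immediate technical containment
    recovery_steps: list = field(default_factory=list)       # restore service and trust

loan_playbook = ModelIncidentPlaybook(
    model="loan-approval-v3",
    compromise_vectors=["data poisoning via the retraining pipeline", "prompt injection"],
    shutdown_scenarios=["API flooding", "revoked upstream data feed"],
    response_steps=["route traffic to the last known-good model", "freeze retraining jobs"],
    recovery_steps=["retrain from a verified data snapshot", "re-run the validation suite"],
)
```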

But the transition is happening faster than most boardrooms can adapt, he continues. "AI has quietly moved from being an experimental project into the heart of business operations. It’s already approving loans, optimizing supply chains, and even drafting contracts. A system like that is core infrastructure, and we need to start treating it that way." However, the mindset in the boardroom hasn't caught up yet, he says.

  • The plug-in problem: Neglected foundational security is a direct result of outdated thinking, according to Mathews, with key elements like AI-specific monitoring and incident response plans often overlooked entirely. "Too many leaders treat AI like a simple plug-in instead of the core operational system it has become. When you think of something as just another tool, you don't give it the same security rigor. A mistake like that is how systemic risk gets introduced across the entire environment."

Eventually, organizations will have to confront threats unique to AI, Mathews explains. Unlike traditional attacks on infrastructure, these new vectors corrupt the model's logic itself, turning the AI into a tool for the attacker. "Threats like prompt injection are fundamentally different from traditional attacks. Data poisoning, for example, is the act of manipulating an AI's training data to corrupt its logic. By adding bias, an attacker can ensure the decisions the model makes are the ones they want it to make."
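
To make the mechanics concrete, consider a toy "loan approval" model that learns a score cutoff from training data. In this deliberately simplified sketch (the scenario and numbers are invented for illustration), flipping a slice of labels drags the learned cutoff down, so the poisoned model approves applicants the clean model would reject:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: applicants with a score above 0.5 were approved.
scores = rng.uniform(0, 1, 1000)
labels = (scores > 0.5).astype(int)

def fit_threshold(x, y):
    """Learn the cutoff that best separates approved from denied examples."""
    candidates = np.linspace(0, 1, 101)
    accuracy = [np.mean((x > t).astype(int) == y) for t in candidates]
    return candidates[int(np.argmax(accuracy))]

print(fit_threshold(scores, labels))    # ~0.5: the model learns the real rule

# Poisoning: the attacker relabels marginal applicants (scores 0.3-0.5) as
# approved, biasing the learned cutoff toward the decisions they want.
poisoned = labels.copy()
poisoned[(scores > 0.3) & (scores <= 0.5)] = 1
print(fit_threshold(scores, poisoned))  # ~0.3: weaker applicants now pass
```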

  • Look for the spike: Connecting the threat directly to a practical detection method, Mathews describes how attackers attempting to poison a model often leave an unmistakable signature. "Just as the threats are unique, so are the detection methods. We have to monitor AI systems for anomalies. One of the clearest indicators of an attack is in the API traffic. A sudden, unexplained spike in API calls is a major red flag. It often means an attacker is trying to execute a data poisoning attack by overwhelming the model with malicious inputs."
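
As a rough illustration of that kind of monitoring, the sketch below flags any minute whose call volume sits far above a rolling baseline. The detector class and its thresholds are assumptions for illustration; in production this logic would typically live in an observability or API-gateway layer rather than application code:

```python
from collections import deque
from statistics import mean, stdev

class ApiSpikeDetector:
    """Flag call volumes far above the recent baseline (rolling z-score)."""

    def __init__(self, window=60, threshold=4.0):
        self.history = deque(maxlen=window)  # recent per-minute call counts
        self.threshold = threshold           # deviations needed to count as a spike

    def observe(self, calls_per_minute):
        """Record one minute of traffic; return True if it looks anomalous."""
        is_spike = False
        if len(self.history) >= 10:  # wait for a minimal baseline first
            baseline, spread = mean(self.history), stdev(self.history)
            if spread > 0 and (calls_per_minute - baseline) / spread > self.threshold:
                is_spike = True
        self.history.append(calls_per_minute)
        return is_spike

# Usage: steady traffic, then a burst that should trip the alert.
detector = ApiSpikeDetector()
for minute, calls in enumerate([100, 104, 98, 101] * 10 + [5000]):
    if detector.observe(calls):
        print(f"minute {minute}: anomalous volume ({calls} calls/min)")
```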

Fortunately, most security teams won't need to reinvent the wheel, Mathews explains. Instead, they can ground the new threat in existing, respected frameworks.

  • Old rules, new game: Applying a familiar principle like zero trust can help leaders build a solid foundation for AI security, he explains. "We don't have to start from scratch. The most effective first step is to adapt the proven security principles we already use. We can, and should, implement a Zero Trust architecture for AI. That means treating every AI model and agent as untrusted until it proves otherwise, enforcing strict access controls, authentication, and comprehensive logging, just as we would for any other critical system."
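
What might that look like in code? The sketch below is a minimal, assumed design rather than a reference implementation: it refuses any model call that is not both authenticated and on an explicit allow-list, and logs every decision. The names and the HMAC token scheme are illustrative, not a specific product's API:

```python
import hashlib
import hmac
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

SHARED_SECRET = b"rotate-me"  # in practice: pulled from a managed secret store
ALLOWED_OPERATIONS = {"loan-agent": {"score_application"}}  # caller -> permitted calls

def verify_token(agent_id, token):
    """Authenticate the caller; no model or agent is trusted by default."""
    expected = hmac.new(SHARED_SECRET, agent_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

def call_model(agent_id, token, operation, payload):
    """Gate every model invocation behind auth, authorization, and logging."""
    if not verify_token(agent_id, token):
        log.warning("denied %s: authentication failed", agent_id)
        raise PermissionError("authentication failed")
    if operation not in ALLOWED_OPERATIONS.get(agent_id, set()):
        log.warning("denied %s: operation %s not allowed", agent_id, operation)
        raise PermissionError("operation not allowed")
    log.info("allowed %s -> %s", agent_id, operation)
    return {"status": "ok", "operation": operation}  # stand-in for the real model call

# Usage: a registered agent calling an operation it is scoped for.
token = hmac.new(SHARED_SECRET, b"loan-agent", hashlib.sha256).hexdigest()
call_model("loan-agent", token, "score_application", {"applicant": 42})
```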

On ownership, Mathews gives a definitive answer: responsibility belongs with the CSO. "Ultimately, there needs to be a single, clear owner. The Chief Security Officer must be responsible for protecting the organization's AI applications, projects, and programs. There can be no ambiguity on this point." Internal teams, meanwhile, play an important role alongside third-party vendors: "To build the 'sandbox' and ensure the right infrastructure controls are in place to contain the AI. The goal is to create a restricted zone so that even if a model is manipulated or compromised, the damage cannot seep into the core infrastructure."
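
One way to read that containment advice, assuming a tool-using agent runtime (an assumption, since Mathews doesn't name a specific architecture): the model's only reach into the world is an explicit allow-list of callables, so even a fully compromised model can only perform pre-approved actions. A minimal sketch:

```python
class Sandbox:
    """Restrict a model to an explicit allow-list of actions; nothing else exists."""

    def __init__(self, tools):
        self._tools = dict(tools)  # name -> callable: the model's entire reach

    def execute(self, tool_name, *args, **kwargs):
        if tool_name not in self._tools:
            # A manipulated model asking for anything outside the zone stops here.
            raise PermissionError(f"tool {tool_name!r} is outside the sandbox")
        return self._tools[tool_name](*args, **kwargs)

# The model may read reference data, but it has no path to core systems.
sandbox = Sandbox({"lookup_rate": lambda currency: {"USD": 1.0}.get(currency)})
print(sandbox.execute("lookup_rate", "USD"))  # allowed: 1.0

try:
    sandbox.execute("delete_records", "customers")  # a compromised model's attempt
except PermissionError as exc:
    print("blocked:", exc)
```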

As AI adoption accelerates, the challenge for leaders is clear, Mathews concludes. Now is the time to treat AI as the critical infrastructure it has already become. "We are in a truly dangerous phase where AI is already mission-critical, but it is not being governed like mission-critical infrastructure. While challenging, it will be manageable if leaders are willing to adopt the new frameworks required to navigate it."
