
The Promise of AI Comes from Governing Systems that Don't Sit Still

New Tab News Team
April 21, 2026
AI

Syeda Iram Fatima Jafry, Senior Manager of Data Risk and Privacy at PwC, discusses the shifting target of AI governance and recommends that leaders govern outputs as much as inputs, with a focus on ongoing change.

For decades, corporate compliance meant writing hard-coded rules and auditing against them at a single point in time. As AI matures into an enterprise technology, however, it has quickly become apparent that this approach is a dead end. AI models change and drift once they enter production, so static snapshot audits no longer make sense, and teams are realizing they need to govern outputs rather than just inputs. Among the organizations experimenting most aggressively, the focus is moving away from rigid process checks and toward close scrutiny of the outputs and outcomes those systems produce.
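What output governance looks like in practice can be sketched with a small monitoring loop. The example below is a generic illustration rather than anything from PwC's toolkit: it compares the distribution of a model's live scores against a baseline captured at sign-off using the Population Stability Index, and flags the model for review when drift crosses a common rule-of-thumb cutoff. All names, distributions, and thresholds are assumptions.

```python
# Generic sketch of output-focused drift monitoring (not a PwC artifact):
# compare live model outputs against the baseline captured at sign-off and
# alert when the Population Stability Index (PSI) crosses a threshold.
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between baseline and live output samples."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)
    # Convert counts to proportions, flooring at a tiny value to avoid log(0).
    base_p = np.clip(base_counts / base_counts.sum(), 1e-6, None)
    live_p = np.clip(live_counts / live_counts.sum(), 1e-6, None)
    return float(np.sum((live_p - base_p) * np.log(live_p / base_p)))

# Illustrative data: risk scores at sign-off vs. this week's production run.
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5_000)  # distribution when approved
live_scores = rng.beta(3, 4, size=5_000)      # the model has since drifted

drift = psi(baseline_scores, live_scores)
if drift > 0.2:  # >0.2 is a common rule of thumb for significant drift
    print(f"PSI={drift:.3f}: escalate; the snapshot audit is stale")
else:
    print(f"PSI={drift:.3f}: within tolerance")
```

The point of the sketch is the posture: the audit artifact is not a one-time approval document but a baseline that live outputs are continuously checked against.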

Syeda Iram Fatima Jafry sits right in the middle of that transition. As a Senior Manager of Data Risk and Privacy at PwC, she spends her days navigating these exact hurdles. With more than 12 years of experience building global compliance programs, she brings a deep bench of certifications and a career spent in the trenches of legacy SOX audits and control frameworks. She now works at the intersection of technical execution and corporate accountability, grappling with the common challenge of applying traditional governance at the edge of AI innovation.

"Traditional rule-based governance is not going to work, because AI changes at execution time," Jafry says. "We need to focus on the outputs, the outcomes, and the behaviors it produces."

  • The magic wand myth: Jafry says the disconnect often starts at the top, with leaders who expect instant payoff from a technology that actually needs cycles of tuning to stabilize. "Stakeholders have a tendency to think that AI is a magic wand, expecting an abracadabra moment where it instantly answers questions and solves problems. Fine-tuning is required, and that takes time. You have to go through multiple cycles because there are variations that we expect, and variations we don't know about that come as surprises to us."

For Jafry, the biggest roadblock to enterprise AI adoption is rarely technical. Some leaders still treat the technology as an instant fix, assuming a system will operate flawlessly on day one. Moving past that mindset is an operational learning curve: teams have to budget real time for cyclical fine-tuning and continuous monitoring. Even with high-level frameworks like the NIST AI Risk Management Framework on the table, teams ultimately have to navigate day-to-day model drift in production.

When a model hallucinates, the corporate knee-jerk reaction is to slap a human reviewer on every single output. It seems safer, but blanket manual review can easily offset the efficiency that automation was supposed to deliver. In response, Jafry’s clients are starting to redefine the traditional human-in-the-loop model around risk thresholds instead, a routing pattern sketched after the callouts below. Initial baselines are often built from legacy, non-AI data; over time, teams feed the model’s live outputs back into the governance process to refine those boundaries. Jafry says her team anchors governance meetings on system output, including week-over-week changes, so there is a single source of truth for governance discussions and decisions.

  • Anatomical anomalies: To illustrate why real-world models demand hands-on oversight, Jafry points to a healthcare experiment her team ran that produced medically impossible results. "We were playing around with a healthcare use case with some data, and the model came up with some predictions where women were being diagnosed for problems with anatomical parts that they don't have. That's going to require a bit of cleanup."

  • Drawing the line: Jafry says the goal is to reserve human intervention for cases that genuinely warrant it, rather than reviewing every exception by default. "You don't want to have an agent or an AI model where you have a human approving every exception. That's ridiculous, and it's completely contrary to introducing AI and having that efficiency. What we've been thinking about is defining a threshold that is an acceptable level of risk to be handled by the agent. But above that threshold, we're going to have alarms and alerts where we involve humans."
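A minimal sketch of the threshold model Jafry describes, with entirely hypothetical actions, scores, and cutoff: the agent auto-handles anything at or below the agreed risk level, and anything above it raises an alert for a named human reviewer. A plausibility failure like the anatomical example above would simply score at the top of the scale and escalate.

```python
# Hypothetical threshold-based human-in-the-loop routing: the agent handles
# low-risk actions on its own; higher-risk actions trigger an alert and a
# human reviewer. Scoring scale, threshold, and actions are all invented.
from dataclasses import dataclass

@dataclass
class AgentDecision:
    action: str        # what the agent proposes to do
    risk_score: float  # 0.0 (benign) to 1.0 (severe), from a separate scorer

RISK_THRESHOLD = 0.35  # seeded from the legacy baseline, retuned as live outputs accrue

def route(decision: AgentDecision) -> str:
    """Auto-approve low-risk actions; escalate the rest with an alert."""
    if decision.risk_score <= RISK_THRESHOLD:
        return f"auto-approved: {decision.action}"
    return f"escalated to human review: {decision.action} (risk={decision.risk_score:.2f})"

for d in (AgentDecision("refund $40 duplicate charge", 0.10),
          AgentDecision("waive $9,000 late fee", 0.80)):
    print(route(d))
```

The design choice worth noting is that the threshold is data, not code: it starts from the legacy baseline and gets retuned as reviewed escalations accumulate, which is exactly the feedback loop described above.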

As teams hand more work to autonomous systems, agent behavior itself becomes part of corporate risk discussions. The move to more agentic AI is particularly visible in highly regulated environments, where organizations are experimenting with automation for repetitive, rules-based work; some, for example, use AI to pull data and structure materials for SOX reporting. That creates a new meta-governance layer: companies are now trying to govern the autonomous agents that support their compliance workflows while still building the capabilities to verify those agents’ decisions. That capability gap is one reason audit committees and boards are seeking deeper oversight of AI risk management.

The same pattern plays out in SOX itself. Jafry notes that executives tend to take those obligations seriously because of the mandatory reporting and personal sign-offs involved. That built-in accountability has made SOX an early proving ground for AI assistance. Teams usually start by automating well-bounded, repetitive work, keeping human judgment in the loop for accounting decisions. She says that introducing agents into these workflows adds another set of controls to design and test. Teams must verify the financial process itself, as well as the controls governing how agents behave and are monitored. That mirrors a wider conversation in operations about governing AI agents and their decision-making, not just the underlying models.
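To make that second control layer concrete, here is a hypothetical test, not a PwC method or a SOX requirement: beyond verifying the financial control itself, it checks how an agent behaved when preparing a reporting artifact. Every field name is an assumption.

```python
# Hypothetical control test for the agent layer: did the agent record data
# lineage, obtain a human sign-off, and run on an approved model version?
def test_agent_artifact(artifact: dict) -> list[str]:
    """Return control findings for one agent-prepared reporting artifact."""
    findings = []
    if not artifact.get("source_systems"):
        findings.append("missing data lineage")
    if not artifact.get("reviewer"):
        findings.append("no human sign-off")
    if artifact.get("model_version") != artifact.get("approved_model_version"):
        findings.append("unapproved model version")
    return findings

artifact = {
    "name": "Q2 revenue rollforward",
    "source_systems": ["ERP"],
    "reviewer": None,               # agent drafted it; nobody has signed off
    "model_version": "2.3",
    "approved_model_version": "2.1",
}
print(test_agent_artifact(artifact))  # -> ['no human sign-off', 'unapproved model version']
```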

  • Top-down or not at all: Jafry says the organizations making real governance progress share one trait: identifiable ownership at the top, with a C-suite sponsor personally pushing the AI risk agenda. "The companies that have done better are those where the leadership has taken initiative. You have identified ownership where somebody at the top is pushing it. Be it your steering committee, be it your IT, but you need somebody from the C-suite."

  • The missing middle: Jafry says ownership tends to collapse when AI risk gets pushed down the org chart without executive backing. "When it's delegated to middle-tier management, the biggest challenge has been determining the ownership because everybody's got their day jobs, and you get people insisting they've done their bit and it isn't their problem."

Translating that top-down oversight into daily operations can be difficult as AI projects move down the org chart. A primary hurdle is the tendency to treat a single governance framework as if it can blanket an entire enterprise. When implementations are delegated without clarity on ownership, it becomes harder to identify and manage unmonitored tools and shadow AI across decentralized business units. Organizations that treat AI governance with the same top-down accountability as financial reporting tend to be more successful in integrating and scaling AI agents into existing workflows.
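One way to avoid the blanket-framework trap is to make governance settings explicit per business unit, so gaps surface instead of defaulting silently. The sketch below is purely illustrative; the units, thresholds, cadences, and owners are invented.

```python
# Hypothetical per-unit governance registry instead of one blanket framework:
# each business unit carries its own risk threshold, review cadence, and a
# named owner. An unregistered unit is treated as a finding, not a default.
POLICIES = {
    "claims_processing": {"risk_threshold": 0.20, "review_cadence": "daily",  "owner": "BU-A risk lead"},
    "marketing_content": {"risk_threshold": 0.50, "review_cadence": "weekly", "owner": "BU-B risk lead"},
}

def policy_for(unit: str) -> dict:
    # No silent fallback: an unregistered unit is often the first sign
    # of shadow AI running outside the governance program.
    if unit not in POLICIES:
        raise LookupError(f"no governance policy registered for {unit!r}; possible shadow AI")
    return POLICIES[unit]

print(policy_for("claims_processing"))
```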

Underneath those structural questions is a human one. Bypassing legacy process owners creates unnecessary friction by ignoring their valuable institutional knowledge. Those veteran employees often hold the exact data needed to make the AI work. They understand how to bridge difficult handoffs between teams. If AI is introduced as something imposed on them rather than built with them, projects stall. Reversing that trend and reducing workplace friction around AI adoption requires a collaborative model between human domain experts and AI tools, rather than a replacement mindset.

  • The conglomerate conundrum: Jafry says the sheer variety of use cases inside large enterprises makes a single governance playbook unworkable. "Large tech companies are doing so many things that use cases within one business unit are so varied. It's challenging to say that this is the approach you use for business unit A, and this is the approach you use for business unit B. This is not going to work. You need that tweaking and nuance, and you need that constant monitoring to see that you're updating your governance mechanism."

  • Cue the crusaders: Jafry says the process owners who know the work best are too often sidelined when AI lands, and that exclusion is itself a governance risk. "When we bring in AI, and it seems like an imposition, it's not well received by people with organizational or process knowledge, which is a gold mine. They can really provide those missing dots. The AI crusaders are here. They're going to come and take over, they're going to redesign and change everything, and it's going to be a revolution. Do not forget the people who are a significant part of it."

The next phase of AI governance will likely hinge less on finding the perfect control framework and more on basic resource allocation. Many organizations have a working sense of the "what"—the models, tools, and policies in play—and are iterating on those quickly. A frequently observed gap is a lack of clear ownership. Teams still need to decide who is accountable for outcomes, who is empowered to intervene, and who is expected to partner across functions.

What ties all of this together is a posture shift. Governance used to be something an organization could finish. Now it's something an organization has to keep doing, in a different shape, for every use case it takes on. Jafry says the discipline itself has changed more than any single control has. "The biggest takeaway for us as governance professionals has been that it's a moving target, and we need to be alert twenty-four seven so that we don't miss out on things," she says. "It's not that one approach covers all. It's going to slightly change from use case to use case, from organization to organization, from team to team."

