‘The Last Six Months Have Changed Everything’: How Security Experts are Governing AI in Real Time

New Tab News Team
March 25, 2026
AI

Vineet Love, VP and Deputy Head of Cybersecurity with DigitalNet.ai, describes his GRC playbook for security leaders engaging with rapidly evolving agentic AI infrastructure.

Credit: Outlever

AI adoption is prompting a fundamental rethink of enterprise security, particularly around identity, vulnerability management, and control testing. The shift from hype to solutions that solve real-world problems is supercharging adoption, and security leaders are racing to keep pace with the innovation. At the center of that race sits a core question: how to apply established security governance to a new class of autonomous tools emerging from a crowded vendor market.

According to Vineet Love, the Vice President and Deputy Head of Cybersecurity Practice for North America at the enterprise digital transformation company DigitalNet.ai, the market has entered a new phase of real-world deployment over the last six months. As a long-time security expert for companies like BDO, Accenture, and Mazars, Love recognizes that the journey from hype to solution pulls an entire organization along, including security departments. “The speed of innovation in the last six months has changed everything, and security is being rethought in real time,” Love says.

That new velocity is fueling a fast-moving vendor environment, creating new considerations for security leaders. Agentic AI blurs the lines between traditional agent identity and security; it introduces new and complex attack surfaces, including supply-chain risks posed by on-premises agents connecting to external LLMs. The long-standing problem of shadow IT is re-emerging as shadow AI, with many organizations embracing generative AI in the workplace in ways that can create a major governance gap outside established security frameworks.

  • One-stop shops: For Love, the vendor space is rapidly changing as companies race to incorporate as many agentic AI capabilities as possible. “Traditional product companies are adding AI capabilities to their products, and they are funding R&D two or three times faster than before. This is driving vendor consolidation because everyone wants to be the one-stop shop in the agentic space,” he explains.

To manage the new reality, Love suggests that leaders apply a proven playbook, adapting core pillars such as governance, risk, and compliance (GRC) and identity fundamentals to the unique challenge of governing autonomous agents, where conventional cybersecurity may prove insufficient. Identity, as Love describes it, is a major perimeter for security teams managing access for both human and AI agents: “The fundamentals in identity do not change. You still have the same controls, but now you are granting access to agents instead of humans. Concepts like least privilege, segregation of duties, and the maker-checker process still remain and will continue to adapt.”
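Love's point that the identity fundamentals carry over can be sketched in code. Below is a minimal, hypothetical access check that applies least privilege and a maker-checker rule to both human and agent identities; all names, roles, and actions are illustrative, not part of any real system Love describes.

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    name: str
    kind: str                      # "human" or "agent"
    roles: set = field(default_factory=set)

# Least privilege: each role maps to an explicit allow-list of actions.
ROLE_ACTIONS = {
    "provisioner": {"create_user", "disable_user"},
    "approver":    {"approve_change"},
}

def allowed(identity: Identity, action: str) -> bool:
    """Grant only actions explicitly tied to the identity's roles."""
    return any(action in ROLE_ACTIONS.get(r, set()) for r in identity.roles)

def maker_checker(maker: Identity, checker: Identity, action: str) -> bool:
    """Segregation of duties: maker and checker must be different
    identities, and the checker must hold approval rights."""
    return (maker.name != checker.name
            and allowed(maker, action)
            and allowed(checker, "approve_change"))

agent = Identity("provision-bot", "agent", {"provisioner"})
admin = Identity("alice", "human", {"approver"})

print(allowed(agent, "create_user"))               # True: within its role
print(allowed(agent, "approve_change"))            # False: least privilege
print(maker_checker(agent, admin, "create_user"))  # True: a human checks the agent's work
```

The point of the sketch is that nothing in it cares whether the principal is a person or an agent: the same role-based allow-list and four-eyes check apply to both.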

Love's playbook prioritizes a frank assessment of an organization's use case for agentic AI and its tangible impact on the business. Before enforcing rules, leaders can conduct a rigorous three-step evaluation that spans multiple operational layers. The first is an assessment of high-level governance and on-the-ground deployment, the "people and processes" layer, which covers how technology is evaluated and how humans fit into its workflows; this is fundamentally a GRC question. Second, organizations need to address the identity layer, understanding how access is granted or denied to human and AI agents, including how the organization implements measures such as least privilege and segregation of duties. Finally, the third layer is what Love calls the "human loop": ensuring any new tool solves a real problem while keeping a human in the driver's seat.

  • The three-step test: Love is confident that enterprises that work through this three-layer approach are better positioned to understand costs and risks. “Once you define the use cases, you must map them to your current processes to find the gaps. From there, you can determine the real savings and ask if it saves time, saves cost, or allows you to use fewer people.”
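The three-layer evaluation Love describes could be captured as a simple checklist. The layer names and questions below paraphrase the article; this is a hypothetical aid for mapping use cases to gaps, not a formal framework Love publishes.

```python
# Hypothetical checklist for the three-layer evaluation of an agentic AI use case.
EVALUATION_LAYERS = [
    ("people_and_processes",   # Layer 1: GRC
     ["Which business process does the agent touch?",
      "Who governs technology evaluation and human processing?"]),
    ("identity",               # Layer 2: access
     ["How is access granted or denied to humans and agents?",
      "Are least privilege and segregation of duties enforced?"]),
    ("human_loop",             # Layer 3: oversight
     ["Does the tool solve a real problem?",
      "Is a human kept in the driver's seat?"]),
]

def evaluate(answers: dict) -> list:
    """Return the layers whose questions are not all answered 'yes',
    i.e. the gaps to map against current processes."""
    gaps = []
    for layer, questions in EVALUATION_LAYERS:
        if not all(answers.get(q) == "yes" for q in questions):
            gaps.append(layer)
    return gaps
```

A use case that cannot answer "yes" across all three layers surfaces exactly the gaps Love says should be mapped before any claim about savings is made.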

With this framework in place, the key practical enforcement lever is securing agentic AI through identity and access management: drawing a hard line between tasks fit for automation and those that demand human judgment. Drawing that line forces organizations to balance the promise of agentic efficiency against the risk of error from poorly governed agents or sophisticated insider threats. The decision, Love explains, hinges on an organization's risk appetite and a sober assessment of the use case, a process that requires updated AI security threat modeling and will ultimately drive the need for broader AI agent standards.

  • A hard line: Love is emphatic that some permissions and roles should never be held by an AI agent. “You cannot have an agent with privileged, root-level access to a server. That requires a human and a maker-checker process. Period.”

  • Agent's work: Conversely, there are some rote tasks and manual labor that AI agents should absolutely be doing. “But for low-end, mundane tasks like user provisioning and deprovisioning, those activities can and should be performed by an agent as part of a baked-in workflow,” he says. While there's still some risk in these roles, it's relatively low and can be weighed against significant efficiency gains.
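Love's hard line between human-only and agent-safe tasks could be enforced at the authorization layer. The gate below is a hypothetical sketch, with illustrative task names, that rejects privileged actions for agent principals while permitting the rote provisioning work he considers appropriate for automation.

```python
# Tasks flagged as human-only vs. agent-safe (names are illustrative).
HUMAN_ONLY = {"root_server_access", "privilege_escalation"}
AGENT_SAFE = {"user_provisioning", "user_deprovisioning"}

def authorize(principal_kind: str, task: str) -> bool:
    """Deny agents any human-only task outright; agents may run
    only the baked-in workflow tasks on the agent-safe list."""
    if task in HUMAN_ONLY:
        # Root-level work requires a human (and, per Love, a maker-checker process).
        return principal_kind == "human"
    if principal_kind == "agent":
        return task in AGENT_SAFE
    return True  # humans may run other tasks under normal policy

print(authorize("agent", "root_server_access"))  # False: human-only
print(authorize("agent", "user_provisioning"))   # True: agent-safe workflow
```

Keeping the human-only list as an explicit deny-first check mirrors Love's "Period." stance: no risk-appetite calculation ever reaches an agent for those tasks.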

AI agent adoption is unfolding at a pace set by industry-specific realities, and the pattern suggests the market is moving past hype into a phase of applied execution. Love points to "highly regulated financial services" with "huge budgets" as the clear early adopters, while legacy-heavy industries like oil and gas are moving more cautiously. Even some federal agencies are accelerating adoption. Such a pattern, he notes, is not new.

  • A familiar playbook: For Love, the adoption cycle for new technologies is somewhat predictable, and AI agents aren't much different. “We saw a similar wave with SASE solutions, where initial adopters created momentum for the larger market. The same thing happened with DLP. It started with a few use cases, the larger group followed, and it became the de facto solution.” Love supports this by noting that the market is already moving past the exploratory phase, and "real-life problems" are being solved.

This shift is crucial, according to Love. As adoption moves from hype to exploration to deployment, it accelerates, which is why he describes a six-month burst. The change subjects agentic AI to the same expectations and risks as any other new technology, and that demands a disciplined, risk-based approach rooted in mature GRC and identity management. “It is not in the hype space. And in a three- or maybe six-month horizon, we will see real capabilities translate into business benefits.”
