AI Browsers Need Real Oversight to Earn Enterprise Trust

Island News Desk
November 23, 2025
Enterprise Security

Mikhail Vasilyev, a Principal Software Development Engineer at Workday, explains why AI browsers need strict visibility, containment, and auditability before enterprise use.



The views and opinions expressed are those of Mikhail Vasilyev and do not represent the official policy or position of any organization.

Browser security is not new, but AI agents with deep access to files, APIs, and system tools are creating a new class of security challenges. These intelligent agents can read, write, and act on a user's behalf, turning the browser from a passive window into an active participant in business operations. In this new context—where applications are often opportunistic and probabilistic—longstanding security models are now being challenged.

For an expert's perspective, we spoke with Mikhail Vasilyev, a Principal Software Development Engineer at Workday. Vasilyev holds a PhD and spent decades as an assistant professor of computational mathematics at the Moscow Institute of Physics and Technology, and he worked as a Production Engineer at Facebook before taking his current role. In Vasilyev's view, organizations rushing to adopt these powerful tools are overlooking their immaturity.

Now, AI browsers are creating a level of risk that many organizations may be unprepared to manage. "It may be fun to run AI-based browsers, but it is extremely dangerous to trust them with anything important or meaningful," Vasilyev says. "Using an AI browser to grab cat pictures is probably okay, but trusting it with your bank account or the ability to take a loan in your name is a significant risk."

  • A primitive state: First, Vasilyev likens the current environment to the early 1990s. "If we speak about LLM-based applications, we are in the era of Windows 3.1 in terms of security. It is opportunistic: if your application receives good data, it will probably do the job. However, if it receives malicious data, too much data, or has bugs, it is very easy to trigger it into various crazy modes of operation."

  • An unpredictable state: Like early operating systems, AI is prone to erratic behavior when exposed to malformed data or carefully crafted prompts. "It only took me a couple of evenings to get a recent, open-source, 400-billion-parameter model on AWS to suddenly start responding in Chinese without being asked," he explains. "To me, that proves that too much context or a carefully crafted prompt injection can push a model into an unpredictable state."

For Vasilyev, significant risk stems from the "fundamentally probabilistic" nature of these models, which leads to his core architectural point: the difference between a safe AI agent and a risky one lies in its interaction surface. "Monitoring a virtual user clicking buttons in a web browser is an order of magnitude more complex than catching an API call," he explains. "In the latter case, you have a clean, machine-readable expression of intent, compared to a visual one."

  • A public problem: Visually navigating a browser is vastly more challenging to monitor than a controlled, software-based method, he continues. "Your security team is never going to approve uncontrolled interaction with the external world. If you're a big public company and it's revealed you're doing something that isn't security-approved, you'll probably find yourself in trouble."
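To make that contrast concrete, the minimal Python sketch below shows why a structured tool call is straightforward to police before it executes, while a simulated click offers no equivalent expression of intent to intercept. The action names, allowlist, and gate function are hypothetical illustrations, not drawn from any specific product.

```python
# Hypothetical sketch: an agent that expresses intent as a structured tool
# call can be checked against policy before anything executes. A virtual
# user clicking through a rendered page offers no equivalent machine-
# readable intent to intercept.

ALLOWED_ACTIONS = {"read_report", "export_csv"}               # approved, low-risk actions
SENSITIVE_TARGETS = {"account_transfer", "loan_application"}  # never allowed

def gate_tool_call(call: dict) -> bool:
    """Return True only if the agent's declared intent passes policy checks."""
    if call.get("action") not in ALLOWED_ACTIONS:
        return False
    if any(value in SENSITIVE_TARGETS for value in call.get("params", {}).values()):
        return False
    return True

# The intent arrives as data, so the decision is simple to make and to audit.
proposed = {"action": "export_csv", "params": {"target": "quarterly_report"}}
print(gate_tool_call(proposed))  # True: explicit, checkable intent
```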

However, because this instability is inherent to the technology, Vasilyev explains, technical controls alone are unlikely to be enough. Instead, the solution lies in training employees' defenses through controlled exposure. "Security training for personnel is like an immune system," he says. "An immune system can go crazy if it is not triggered, but it can be trained. That’s why you need to create a certain level of exposure to realistic threats, like a team that sends phishing emails from time to time."

  • An audit trail: Another complication is what Vasilyev describes as a "governance gap." With agentic AI browsers appearing at scale only months ago, the market has not had time to produce mature frameworks for deployment. In this vacuum, one countermeasure is to make their actions thoroughly auditable. "If something goes wrong, any reasonable organization will need to conduct a retrospective," Vasilyev says. "That's why, for enterprise applications of this kind, I expect very serious attention to tracing and logging every decision point and every input that informed those decisions."

But that audit trail must be tailored, Vasilyev continues. For instance, an engineer might need to see granular agent activity, while their manager requires an aggregated summary of what the agents are doing, creating a hierarchy of insight. "If you can't predict a system's behavior, you must be able to record it," he adds.
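A rough sketch of what that could look like in practice follows, assuming a simple JSON-lines trail; the file name, field names, and helper functions are illustrative rather than taken from any real system. Every decision point is appended along with the inputs that informed it, and the same trail then feeds both a granular engineer's view and an aggregated manager's view.

```python
# Illustrative sketch of decision-point logging and tiered views of the
# resulting trail. All names here are assumptions, not a real product's API.

import json
import time
import hashlib
from collections import Counter

AUDIT_LOG = "agent_audit.jsonl"

def record_decision(agent_id: str, action: dict, inputs: list[str]) -> None:
    """Append one decision point: what the agent did and what it had read."""
    entry = {
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        # Hash bulky inputs (prompts, pages, retrieved documents) so the
        # trail stays compact while each influence remains identifiable.
        "input_hashes": [hashlib.sha256(i.encode()).hexdigest() for i in inputs],
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

def engineer_view(path: str = AUDIT_LOG) -> list[dict]:
    """Granular event stream for debugging a single agent's behavior."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def manager_view(path: str = AUDIT_LOG) -> Counter:
    """Aggregated counts of action types across all agents."""
    return Counter(event["action"]["type"] for event in engineer_view(path))

record_decision(
    agent_id="browser-agent-7",
    action={"type": "form_submit", "target": "expense_report"},
    inputs=["user prompt: file my October expenses", "page: internal expenses form"],
)
print(manager_view())  # e.g. Counter({'form_submit': 1})
```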

Ultimately, AI browsers have immense potential, Vasilyev concludes—at least, when they're deployed correctly. For him, sandboxed testing platforms—where an AI simulates an accountant to find bugs in an application—are the perfect, bounded use case. It's the enterprise equivalent of using an AI for "cat pictures," he explains: the value is measurable and the risk is contained. Outside of that sandbox, however, the stakes are higher. "When it comes to trusting agents with real corporate money, somebody will probably benefit from it. And somebody will blow their company on it."
