As Browsers Gain Agency, Security Focus Moves From User Behavior To Access Control
Enterprise Security
William May, Executive Client Director at altitude80, views agentic browsers as familiar security challenges intensified by speed and access, where permissions matter more than interfaces.

It comes down to standard cybersecurity: Who has access? Why do they have access? When do they have access? It's your whole identity access management.
Enterprises are no longer just protecting people as they click through the web. They’re protecting agentic browsers that can search, decide, and act on a user’s behalf. As tools and threats evolve together, the line of defense shifts away from the screen and toward the permissions behind it. Identity and access management now define the perimeter.
William May is an Executive Client Director at altitude80, where he advises enterprises on large-scale digital transformation, operating model design, and technology strategy. With more than 30 years of experience leading high-stakes initiatives at global firms including IBM, PwC, and Tata Consultancy Services, May has spent his career helping organizations adapt as new tools reshape both opportunity and risk. May sees agentic browsers not as a new category of threat, but as a familiar security problem concentrated into a more powerful endpoint.
"You're only as secure as you are at this moment; you don't know what the next thing is. So you always have to be very diligent about looking for ways you could be exploited. You can never rest on your laurels when it comes to cybersecurity," says May. He follows this principle himself, advocating for disciplined adoption. He admits to being hesitant to install early versions of agentic browsers, preferring to "let somebody else be the testbed."
Elementary, Watson: His caution highlights a key distinction: the technology is powerful, but it isn’t perceptive. "AI today is not truly AI. It's Watson from the IBM days, just way more powerful. Even with the AI code we use today, where we have patterns and prompts that are well laid out, we still get hallucinations," notes May. "We still get bad data because the data out there can be bad, and the AI isn't able to determine whether one source or another has good data unless you tell it."
The call is coming from inside the house: The arrival of agentic technology exposes familiar vulnerabilities, especially those tied to human behavior and accidental exposure, a problem that becomes particularly sharp in the "Bring Your Own Device" era. When organizations can't control the hardware, security relies heavily on stringent access rules, often managed by solutions like a dedicated BYOD workforce browser. "A lot of the risk is usually internal, and it's not usually malicious," May explains. "People are doing what they think is okay, and they leave the door open. A lot of this is education and awareness that's being driven and evolving year over year."
In many ways, this is a familiar story. May recalls his time at IBM, where tools like Dropbox and WhatsApp were forbidden to prevent client data from being stored on external systems. The same principle of containment applies today. He points to a sophisticated hack in which threat actors bypassed the guardrails of Anthropic's Claude by posing as an ethical hacking firm. The incident shows how AI can be deceived by social engineering tactics, including newer forms like prompt injection attacks, where malicious instructions hidden in the content an agent reads can redirect its behavior.
The situation fuels a cycle where AI is used for both attack and defense, putting a premium on speed and scale. As May notes, companies need systems that can spot patterns faster than a human team watching a "room full of screens." But this push for more tools can backfire by creating a messy, over-engineered system—a vulnerability in itself.
Seventy-five problems: "In the cybersecurity domain many clients can have upwards of 75-plus tools, many of them redundant and used in silos," he says. "That's got to go the way of the dodo, because anybody with a good AI tool could find a situation like that, identify the weak spots very easily, and operate under the radar for a long time."
Who, why, when: The answer, then, isn't another tool, but a move toward fewer, better-architected systems with clear access rules, such as a hardened enterprise browser. It means a return to mastering the fundamentals. "It comes down to standard cybersecurity: Who has access? Why do they have access? When do they have access? It's your whole identity access management," explains May. "Where is your information? How is it secured? The same thing goes for every stakeholder in your supply chain."
Yesterday's news: The short half-life of any security assessment means that in such a volatile environment, strategy must move beyond periodic check-ins toward continuous adaptation. "If I do an assessment for a company, as soon as I give it to them or as soon as I finish that assessment, it's old. It could be outdated. That's how fast this moves. You need to have people and systems in place that are constantly evolving, preparing for, adapting to, and responding to threats," says May.
Ultimately, the goal isn’t to sideline human judgment, but to sharpen it. AI can move faster and operate at a scale no team can match, but it can’t decide what truly matters. For decisions that carry real consequence, May insists the final check can’t be automated. "Whenever AI gives you something, especially when it’s critical, you need to have somebody looking at it with a thoughtful eye," he says. "You need to be able to look at it through different lenses and validate what you’re seeing or what you’re about to do."