Luke Mulks, VP at Brave Software, explains why autonomous AI agents in browsers demand an entirely new security playbook.

Everybody loves the promise of AI browsers making life easier. But most people have also been conditioned to give data access and permission without thinking.
The long-standing habit of granting user permissions without a second thought is colliding with the new reality of agentic AI. But the issue is already happening at scale. With the vast majority of employees using AI with little company oversight, it's creating what many are calling "AI sprawl": an extensive ecosystem of ungoverned tools.
For an expert's perspective, we spoke with Luke Mulks, the VP of Business Operations at Brave Software. A core member of the Basic Attention Token team and the host of the "Brave Technologist" podcast, Mulks has spent years on the front lines of browser privacy. Today, he sees AI browsers as the next frontier for enterprise cybersecurity.
“Everybody loves the promise of AI browsers making life easier," Mulks says. "But most people have also been conditioned to give data access and permission without thinking." Beyond granting access, authorization introduces new complexities, he continues, especially when systems can act independently.
"With agentic tools, it is more than giving data. They are actually authorized to do things now, which makes the impact far more severe,” Mulks says. But agents running in the browser? That can pose far greater risks.
Attorney-client privilege: The browser is like the control plane for a user's digital life, Mulks explains. "Meanwhile, people are treating ChatGPT like a therapist or a lawyer. But that information can be surrendered to a government authority or exposed in a hack. Giving such permissive access has consequences."
The local context problem: Because it has a large "blast radius," securing this plane requires end-to-end resilience. "The browser is different," Mulks continues. "It has context for everything. It has local storage, all your local history, and your location. People don't think about these things because they expect the browser to work for them." If compromised, that context could reveal a rich repository of sensitive information.
The problem is made worse by a convergence of corporate hype and regulatory lag, according to Mulks. Today's top-down mandate to integrate AI is being met with a general lack of market fit.
Parlor tricks: Most companies are experimenting with AI demos, Mulks says. But many of those new tools require sweeping access. "Most of what you see right now is parlor tricks, like an agent that can make a tweet. But it often takes longer than doing it yourself. A lot of it is showmanship, not rubber meeting the road."
Reading the fine print: Now, it's creating a significant governance gap for many organizations, Mulks explains. "A company's policies aren't a description of the current product. They describe what the company could do with your data. A privacy policy from four years ago was written for that time. So, the data can be used in ways you aren't thinking about today."
With regulators still playing catch-up, many believe that the responsibility for protecting users lies with the people building the technology. However, successfully managing this risk often involves a major cultural change—one that embeds legal and privacy teams in the development process from the very beginning.
The fun police: "There is often internal resistance to bringing security and privacy teams in early," Mulks says. "People worry their project will get picked apart, and they won't get things done. But if you bring the right people in early, they won't just say no. They will offer solutions to make the project workable and find a middle ground." Such a user-first approach elevates trust into a core business metric, he continues. Here, growth becomes closely linked to keeping promises.
Prioritizing "showmanship" over security has also led to the emergence of entirely new, scalable threats, Mulks explains. One clear example of this new risk is prompt injection, a novel class of vulnerability that exploits an agent's ability to interpret hidden content.
The invisible threat: Brave's own research on "unseeable" prompt injections showed the danger is real and that the vulnerability is widespread. "We found that people could put text on a page that's the same color as the background. To a human, it's unreadable. But to a machine, it's perfectly readable and could command the agent to do things."
Old threats, new tricks: Now, even legacy web technologies are being adapted for new exploits. "With agentic AI, you have to re-evaluate every single threat you know about, because an agent can manipulate existing threat vectors in new ways and with an efficiency you've never seen before."
For Mulks, reframing agentic workflows as a new and distinct privileged identity type is the most logical next step. Agents require an entirely new identity framework, Mulks continues. "A more scalable model is a manager of different agents, where you delegate specific tasks. Each agent would have only the specific permissions needed to do its job, and they wouldn't interoperate without special access. That's far less risky than giving one assistant access to your whole life."
Ultimately, no matter how you slice it, the path forward calls for a healthy dose of pragmatism. "Don't try to eat the whole watermelon in one bite," Mulks concludes. "Pick it apart. These are powerful super tools, so use them in an applied, piecemeal way to get a better sense of how they work and where things could go wrong."

