AI Browsers With Memory Spark Concerns Over Data Exposure And Human Error
Enterprise Security
Andres Andreu, CEO of Constella Intelligence, warns that AI browsers with memory create psychological and technical risks that demand stronger scrutiny and disciplined adoption.

The remarkable convenience offered by a new class of AI-powered browsers with memory comes with a significant trade-off: security. Enterprises are exploring the technology, yet hesitation persists as leaders try to understand how the browsers function, what data they retain, and where that information is ultimately stored. While the appeal is clear, the risks remain unsettled.
Andres Andreu, CEO of identity risk intelligence leader Constella Intelligence, brings a clear-eyed view to this new terrain. A global technology and cybersecurity executive with more than three decades of experience, Andreu has led security at Hearst and 2U and architected secure systems for the Drug Enforcement Administration. In an era defined by AI-driven risk, he believes leaders need pragmatism, discipline, and a healthy dose of paranoia.
"Once these browsers start suggesting actions, people will follow because they get mentally lazy. The risk is tremendous. If something is poisoned at the model level and begins offering nefarious suggestions, you're no longer dealing with simple errors. You're dealing with human psychology in a social engineering context," says Andreu. He explains that beyond the technical risks, the convenience-oriented design of these tools can exploit a common human tendency toward complacency, which can condition users to lower their guard and create a new, scalable vector for manipulation that technology struggles to solve on its own.
The malware multiplier: That vulnerability can become a significant security risk when combined with a specific class of malware that is already rampant. "The combination of AI browsers with memory and infostealers is borderline terrifying," Andreu states. "Imagine with the memory and context an AI browser can hold, how much more an infostealer can get. Even if you told me the state is maintained online, that means calls are happening under the hood. An infostealer, or software that it facilitates, can potentially intercept those flows and start monitoring network traffic."
The human override: To illustrate the gap between even the most advanced defenses and human action, Andreu points out that his company's analysis shows the majority of infostealer infections occur on machines that have EDR protection from the industry's best vendors. "That tells you that no matter how good the EDR software is, the human took an action and overrode whatever the EDR was doing," he explains. "The error was the human action."
So what’s the fix? Andreu suggests the market will likely turn to specialized tools to manage the new attack surface, positioning enterprise browsers as the emerging solution. "I think enterprise browsers are going to become the de facto standard," he says. "They raise the bar on the security side by scrutinizing traffic, which means performing legitimate man-in-the-middle inspection and breaking TLS streams to inspect content."
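For teams weighing that approach, a lab sketch helps make the idea concrete. The short mitmproxy addon below flags requests that appear to carry credential material once the proxy has terminated the TLS stream; the filename, class name, and pattern are illustrative assumptions, not a description of any specific enterprise browser product.

# audit_ai_browser.py -- minimal sketch of TLS-terminating inspection in a lab,
# assuming mitmproxy is installed and its CA is trusted on the test machine.
import re
from mitmproxy import http

# Illustrative pattern for credential-looking material (an assumption; tune for your lab).
SENSITIVE = re.compile(r"(api[_-]?key|bearer\s+[A-Za-z0-9._-]+)", re.IGNORECASE)

class MemoryBrowserAudit:
    def request(self, flow: http.HTTPFlow) -> None:
        # mitmproxy has already broken the TLS stream here, so body and headers are cleartext.
        body = flow.request.get_text(strict=False) or ""
        if SENSITIVE.search(body) or SENSITIVE.search(str(flow.request.headers)):
            print(f"[audit] possible credential material sent to {flow.request.pretty_host}")

addons = [MemoryBrowserAudit()]

Running mitmdump -s audit_ai_browser.py while an AI browser is in use shows, in cleartext, exactly what the tool sends out, which is the inspection capability Andreu describes.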
No magic bullets: But Andreu stresses that effective oversight starts with genuinely understanding the technology. "There's no automagical way of doing this. Unless you understand the tech by setting up a lab and watching network flows, you're operating blind." He notes that teams also need visibility into what happens at runtime in memory, since even well-designed protections at rest do not cover every scenario. As he puts it, "when API keys are in memory, they’re unprotected. That's a different level of exposure."
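That kind of lab can start very simply. The sketch below, which assumes scapy and packet-capture privileges on the test machine, tallies outbound TCP destinations while an AI browser runs so that unexpected endpoints stand out; the 60-second window and port filter are illustrative choices, not a prescribed methodology.

# watch_flows.py -- minimal lab sketch for observing where an AI browser "calls home".
from collections import Counter
from scapy.all import IP, TCP, sniff

destinations = Counter()

def note_flow(pkt):
    # Count outbound TCP destinations so unexpected endpoints stand out.
    if IP in pkt and TCP in pkt:
        destinations[(pkt[IP].dst, pkt[TCP].dport)] += 1

# Capture for 60 seconds while exercising the browser (requires capture privileges).
sniff(filter="tcp and (dst port 443 or dst port 80)", prn=note_flow, timeout=60)

for (dst, port), count in destinations.most_common(10):
    print(f"{dst}:{port}  {count} packets")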
Accountability theater: He's skeptical of third-party assurances, believing that organizations must ultimately take verification into their own hands. Accountability, he notes, is often an illusion. "Who's ever pointed the finger at Microsoft for their security blunders? Nobody. And we still continue to use their software," Andreu says. "How do you know that that thing isn't exfiltrating your data as we speak? You don't."
Across the industry, responsibility for AI-driven tools is still evolving, and oversight often lags behind adoption. Leaders face pressure to move quickly, even when the guardrails are still being defined. This gray zone makes internal validation and clear governance especially important as enterprises experiment with new workflows.
A case for paranoia: In response to these combined technical and human vulnerabilities, Andreu says organizations should cultivate a culture of vigilance. "I've spoken publicly about this notion I've labeled 'healthy paranoia'," he explains. "Making people aware of the reality of what you're up against is absolutely vital, without resorting to fear-mongering."
Cool factor vs. cash: At a strategic level, he stresses that any adoption should be justified by clear business value, setting a standard where need outweighs hype. "You can't make business decisions based on the cool factor. You have to justify what a new tool will give your organization that it doesn't have today. If you don't have a good answer, don't get into it because it's too risky."
That accountability gap, Andreu says, fuels a fantasy in the security world: the idea that executives will formally accept risk. He concludes with a sobering lesson in corporate reality for security leaders, arguing that in the C-suite, plausible deniability is a powerful force that often prevails. "Plausible deniability is the friend of a business executive, not yours as a CISO," he says. "Show me one CEO that will sign their name to risk. No business executive in their right mind will do it."


