How Higher Education Puts Boundaries Around AI Agents With Sanctioned Access Models
Vijay Samtani, CISO at Cambridge University, discusses how blocking AI agents is a losing battle for security leaders. Their best course of action is to build clear rules and guidelines for AI access to control vulnerable surfaces.

The views and opinions expressed are those of Vijay Samtani and do not represent the official policy or position of any organization.
AI agents are expanding the attack surface within higher education faster than security models can adapt. Systems built for human interaction are now being accessed, queried, and acted on by automated agents, introducing risks that traditional controls were never designed to handle. In response, security leaders are shifting their approach, focusing less on blocking AI activity and more on shaping it through structured, sanctioned access that aligns with how these systems actually operate.
We spoke with Vijay Samtani, Chief Information Security Officer for the University of Cambridge. A senior information security leader with over 25 years of experience, Vijay has a track record of resolving major cyber incidents and architecting security for massive organizations, including the London 2012 Olympics and The Royal Mail Group. He cautions that organizations attempting to fight AI agents are setting themselves up for an unwinnable battle: "Trying to block or ban AI agents is an arms race, and in security, we almost never win those."
Instead, he sees a role for security and governance in shaping how AI interacts with systems. In practice, AI is having a major impact on higher education. Its widespread adoption by students and faculty suggests that the technology is gaining a foothold organically, rather than through top-down programs. Now, because the field is moving so quickly, attempting to control it through policy alone is proving difficult.
Already everywhere: AI is now embedded in everyday academic workflows, with adoption driven largely by ease of access. As Vijay notes, "The use of AI tools by students is pretty much universal. It's so easy to download an app to your phone or desktop and ask it questions. We know from surveys that almost all students now use AI."
Literacy on the fly: Accordingly, Vijay recognizes that ease of use enables rapid adoption, even at scale, and universities aren't able to keep pace with this organic growth. "The majority of people are becoming literate by doing. It's a field moving so quickly that even the most forward-thinking universities have probably had only six months to get their heads around how AI agents might impact their environment, and even in six months, things have changed dramatically. We're all playing catch-up."
The result is a new headache that shifts the security focus away from well-understood programmatic APIs and onto a less predictable perimeter: human-facing interfaces now driven by automated agents. For Vijay, this mismatch poses a clear risk, necessitating a new design philosophy that accounts for the scale of vulnerabilities that can emerge in this AI landscape. Without that philosophy, the sheer volume of automated agents could cause publicly facing services to "melt under the pressure."
Accidental API: When it comes to these new attack surfaces, Vijay is clear that any surface that accepts user input is a vector. "What's new is that human-readable stuff has now become an API. A web form that you'd expect a person to fill in can now be filled in by an AI agent just as quickly as any human could." With these agents, even spaces that were considered human-only are now fair game for exploitation.
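To make that concrete, here is a minimal sketch of why a "human-only" web form is, on the wire, just an API endpoint. The URL, field names, and request volume below are hypothetical, purely for illustration:

```python
# A minimal sketch: what a person types into a browser form arrives at the
# server as ordinary HTTP form data. The endpoint and field names here are
# hypothetical, for illustration only.
import requests

payload = {
    "name": "Example Agent",
    "email": "agent@example.com",
    "message": "Submitted programmatically, as fast as the loop allows.",
}

# An AI agent (or any script) can submit the same form far faster than a
# person ever could, which is why unprotected forms can "melt under the
# pressure" of automated traffic.
for _ in range(1000):
    requests.post("https://forms.example.edu/contact", data=payload, timeout=10)
```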
So what's the fix? Rather than simply blocking agents, Vijay says organizations should manage them through purpose-built access models. The approach rests on access controls: security professionals and architects, not end users, decide what an agent is permitted to do. The idea isn't entirely new, mirroring established real-world access structures, but the tooling for applying it to AI agents is still catching up. Vijay illustrates the relationship between a reasoning LLM and access limitations with an email agent. Organizations might rely on the LLM and a well-crafted user prompt to protect their email, but architecture offers a stronger guarantee: "It might be better if you get the architecture right, to say, 'You can't delete my emails because you have read-only access, and that's all.' That's enforced in code we trust, rather than in LLM responses."
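A minimal sketch of what "enforced in code we trust" might look like in practice, assuming a hypothetical mailbox interface: the agent is handed a wrapper that exposes read operations only, so no prompt (and no prompt injection) can reach a delete capability.

```python
# A minimal sketch of architecture-enforced, read-only access for an email
# agent. The Mailbox and Email types are hypothetical stand-ins for a real
# mail backend.
from dataclasses import dataclass


@dataclass(frozen=True)
class Email:
    subject: str
    body: str


class Mailbox:
    """Full-capability store, owned by the trusted application layer."""

    def __init__(self) -> None:
        self._messages: list[Email] = [Email("Welcome", "Hello!")]

    def read_all(self) -> list[Email]:
        return list(self._messages)

    def delete_all(self) -> None:
        # Exists in trusted code, but is never exposed to the agent.
        self._messages.clear()


class ReadOnlyMailbox:
    """The only handle the AI agent ever receives: read access, nothing else."""

    def __init__(self, inner: Mailbox) -> None:
        self._inner = inner

    def read_all(self) -> list[Email]:
        return self._inner.read_all()


mailbox = Mailbox()
agent_view = ReadOnlyMailbox(mailbox)

print(agent_view.read_all())  # Allowed: the agent can read.
# agent_view.delete_all()     # AttributeError: the capability simply isn't there.
```

The guarantee lives in the type exposed to the agent, not in the LLM's behavior: however the model is prompted, the deletion path does not exist in its world.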
Piracy to payment: Vijay points to another innovative-tech edge case from decades prior: music downloading. "There was a period of two or three years when it looked like everyone was downloading MP3s illegally. The music industry got really upset about this, calling it piracy and theft. And then along came Apple Music. Yes, you had to pay, but everyone just said, 'That's cool. I don't really want to be a criminal. If I have to pay to get an MP3, I'll pay, and I'll use a service that makes life easy.'"
At the end of the journey is a system that gives AI use clear rules and a clear path to adopting new technology. Rather than resort to forbidden workarounds, users will take the legal and ethical path if one is available to them. And for Vijay, this path is one in which organizations in higher education can shape how AI is used without taking it on in a punitive or reductive manner. "As soon as there is a safe and sanctioned way to do something, then the illegal and noncompliant stuff just falls away. But don't do the arms race, because you'll lose."

