Calendly CISO Says Threat Modeling Is Key to AI Security Fundamentals

Island News Desk
November 23, 2025
CXO Spotlight

Yassir Abousselham, CISO at Calendly, explains how to master AI security with threat modeling and risk-based analysis.


Most security leaders are struggling with the accelerated, often chaotic push to deploy AI. Overwhelmed by white papers, vendor claims, and lab-based research, many can't distinguish theoretical security risks from real-world threats. As a result, organizations risk wasting money on "just in case" solutions for problems they don't actually have.

So how does one separate real risk from speculation? For Yassir Abousselham, the Chief Information Security Officer at Calendly, the key lies in a structured, disciplined approach. Having built and led cybersecurity programs at prominent tech companies like Splunk, Okta, and Google, Abousselham has firsthand experience navigating multiple waves of technological change throughout his career.

"For the vast majority of companies consuming LLMs, the challenges today mirror the exact same issues and practices as every major technological transformation over the last few decades," Abousselham says. From his perspective, the issue often stems from a lack of distinction between applicable threats and theoretical ones.

  • Know your role: "Risks like data poisoning, model theft, and model inversion apply to frontier model companies or companies that specialize in fine-tuning models," Abousselham explains. "But they do not apply to organizations consuming LLMs through an API."

Drawing on the philosophy presented in his 'Not Your Problem' essay, Abousselham poses a foundational question: Is the organization an AI producer or an AI consumer? The distinction helps demystify AI security by reframing it as just another expansion of the existing attack surface. "The real threats are the boring ones," he says.
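That producer-or-consumer triage can be made concrete. Here is a minimal sketch in Python, where the role labels and threat groupings are illustrative assumptions drawn from Abousselham's examples rather than any standard taxonomy:

```python
# Rough sketch of the producer/consumer distinction. The threat
# groupings are illustrative assumptions, not a standard taxonomy.
PRODUCER_THREATS = {"data poisoning", "model theft", "model inversion"}
CONSUMER_THREATS = {
    "prompt injection via untrusted input",
    "session bleed exposing PII",
    "over-permissioned AI agents",
    "sensitive data in logs",
}

def applicable_threats(role: str) -> set[str]:
    """Return the threat classes worth modeling for a given role."""
    if role == "producer":   # builds or fine-tunes models
        return PRODUCER_THREATS
    if role == "consumer":   # consumes a hosted LLM through an API
        return CONSUMER_THREATS
    raise ValueError(f"unknown role: {role!r}")

print(sorted(applicable_threats("consumer")))
```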

  • Security's greatest hits: Ironically, some of the most fragile links in the chain are often the same failures in basic security hygiene the industry has been facing for decades, Abousselham explains. "If we don't validate user input, attackers can gain access to information they shouldn't. If we're not segregating user sessions, one session can bleed into another and expose PII. If we're not right-sizing the authorization for AI agents, tools, or users, that can lead to data exposure. If we don't filter sensitive information from logs, that data can be leaked. The risks are more of the same. These are issues that we have dealt with historically as part of mainstream security."
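Each failure on that list maps to a control security teams already run elsewhere. As one hedged illustration, a consumer-side gateway in front of an LLM API might validate input and scrub PII before anything reaches a log; the function names and redaction patterns below are assumptions for the sketch, not Calendly's implementation:

```python
import logging
import re

logger = logging.getLogger("llm_gateway")

# Illustrative patterns only; a real deployment would use a vetted
# PII-detection library plus organization-specific rules.
PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN format
]

def redact(text: str) -> str:
    """Replace matches of known PII patterns before logging."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def handle_prompt(user_id: str, prompt: str) -> str:
    # Basic input validation: reject empty or oversized prompts.
    if not prompt or len(prompt) > 8_000:
        raise ValueError("prompt failed validation")
    # Log the request with sensitive content filtered out.
    logger.info("user=%s prompt=%s", user_id, redact(prompt))
    # ...call the hosted LLM here, scoped to this user's session...
    return "(model response)"
```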

  • Show me the breach: The scarcity of real-world AI attacks supports his point. "We have not heard of a major data breach that was caused by the adoption or the consumption of a reputable, large language model." That observation, Abousselham suggests, should prompt a reassessment of where leaders focus their attention.

Drawing a direct parallel to the cloud migration era, he recalls how organizations that obsessed over exotic threats often overlooked the basics of secure configuration. Then, the answer was mastering the shared responsibility model. Today, he says, the solution is the same: the provider is responsible for the AI model itself.

  • Not your problem: For Abousselham, this helps to clarify where a CISO’s control begins and ends. "A failure in the model's guardrails is equivalent to a failure in a cloud provider. If Claude, Gemini, or ChatGPT were to experience a major failure, it would be an industry-wide issue. That situation is beyond our control as consumers. It is the responsibility of the technology providers to fix."

  • Model behavior: Instead of a new AI framework, however, Abousselham offers a specific mandate for every new use case. "If there is one action I would invite my fellow CSOs to take, it is this: require a threat model for all new AI use cases," he proposes. "Threat modeling puts structure around the problem and maps attacks to your specific architecture. The focus must be on ensuring that the threats the team is examining are applicable and realistic. That is how we focus our investments on the areas that need it most."
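What that mandate could look like in practice: a lightweight per-use-case threat-model record, sketched in Python. The field names and sample entries are illustrative assumptions, not a published framework:

```python
from dataclasses import dataclass, field

@dataclass
class AIThreatModel:
    """Lightweight record a team files before shipping an AI use case."""
    use_case: str
    role: str                      # "producer" or "consumer"
    assets: list[str]              # data and systems the feature can touch
    threats: list[str]             # applicable, realistic threats only
    mitigations: dict[str, str] = field(default_factory=dict)

    def unmitigated(self) -> list[str]:
        """Threats with no recorded mitigation -- blockers for launch."""
        return [t for t in self.threats if t not in self.mitigations]

model = AIThreatModel(
    use_case="meeting-summary assistant",
    role="consumer",
    assets=["calendar events", "attendee emails"],
    threats=["prompt injection", "PII in logs"],
    mitigations={"PII in logs": "redaction filter on gateway"},
)
assert model.unmitigated() == ["prompt injection"]
```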

Beyond immediate technical threats, Abousselham flags a forward-looking risk at the intersection of cybersecurity and law. "When an LLM provider trains on an organization's data, IP questions arise that are not yet fully addressed by law, despite frameworks like the EU AI Act," he says. "If sensitive corporate data enters the model's training corpus, does the enterprise still own that information, or does it become the property of the LLM provider once converted into model weights? We need to pay very close attention to this."

As a result, Abousselham concludes, the CISO's role must expand into a close partnership with the legal team, and security leaders must keep distinguishing real threats from theoretical ones to stay ahead of the risks. "This transformation is a lot faster than previous ones I've had a chance to manage. You cannot afford to take your eye off the ball. We have to be constantly discussing and challenging our assumptions because this field is moving so fast."
