As AI outpaces reactive security, the lack of secure-by-design code poses serious risks

Island News Desk
September 18, 2025
Enterprise Security

Lyft's Anshuman Bhartiya discusses the need to integrate security from the start, not just at the pull request stage.

Credit: Outlever


AI is accelerating code delivery tenfold, yet security remains an afterthought. The breakneck pace of innovation has made traditional security playbooks not just obsolete but dangerous, creating exposure the industry can’t afford.

Anshuman Bhartiya, a Staff Security Engineer at Lyft, is focused less on the threats AI might bring and more on the ones teams are sleepwalking into. As code velocity skyrockets, he says, the real danger isn’t novel attacks but the industry’s failure to rethink how security gets done.

  • Left out: “We’ve been talking about ‘shifting left’ for years, but the truth is, we’ve only shifted as far as the pull request,” Bhartiya says. “By the time an engineer has written and committed the code, it’s already too late. For the longest time, we've been stuck in a reactive world, finding vulnerabilities after the code has been shipped and used by customers, instead of focusing on building secure things from the ground up.” Most teams are just scanning code after it's written, not designing security in from the start.

  • Security groundhog day: “We play this game where we see the same kind of vulnerabilities, just in different variations, again and again. We aren’t learning from our past failures,” Bhartiya warns. Legacy systems compound the problem, leaving organizations that aren’t managing infrastructure as code fundamentally unprepared for the speed of AI. “The velocity at which things are going to get built and deployed is just going to skyrocket now,” he adds. “It's going to become very difficult to deploy things securely.”

The challenge isn’t just the volume of code, but the tools themselves. As autonomous agents are given more power to execute code and integrate with internal and external services, they create entirely new attack surfaces. “What does the blast radius look like for these integrations? Are these systems following the principle of least privilege?” Bhartiya asks. “There are some concerns around authentication and authorization with respect to how these agents are going to work.”
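Bhartiya’s least-privilege question can be made concrete. As one minimal sketch (the tool names and scope strings here are hypothetical illustrations, not Lyft’s actual tooling), an agent session can be gated so it may invoke a tool only when every permission that tool requires was explicitly granted, keeping the blast radius of any single integration small:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tool:
    """An integration an agent can call, with the permissions it needs."""
    name: str
    required_scopes: frozenset

@dataclass
class AgentSession:
    """Tracks the scopes explicitly granted to one agent session."""
    granted_scopes: set

    def can_invoke(self, tool: Tool) -> bool:
        # Least privilege: permit the call only if the session's grants
        # cover every scope the tool requires -- nothing is implied.
        return tool.required_scopes <= self.granted_scopes

# Hypothetical tools: one read-only, one highly privileged.
read_tickets = Tool("read_tickets", frozenset({"tickets:read"}))
delete_repo = Tool("delete_repo", frozenset({"repos:admin"}))

# A session granted only read access cannot reach the admin tool.
session = AgentSession(granted_scopes={"tickets:read"})
print(session.can_invoke(read_tickets))  # True
print(session.can_invoke(delete_repo))   # False
```

The design choice is deny-by-default: an agent’s reach is bounded by an explicit grant list rather than by whatever its host process happens to be able to touch, which directly limits the blast radius Bhartiya describes.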

  • Embedded allies: One way to build a proactive security culture today, Bhartiya suggests, is through a hands-on, human-powered approach. “It’s this idea of a security champions program, where you have someone with security expertise embedded directly into an engineering team,” he says. He notes that having a security-minded person attend daily scrum calls to understand how things are built is a powerful way to foster trust, though the model can be resource-intensive. “Just doing that is seen as an effort from the security teams to get more involved, more embedded.”

  • An empathy engine: While direct human partnership is key, Bhartiya believes technology can be used to scale one of security's oldest challenges: the people problem. “It sounds weird, but we should explore using agents to help with stakeholder relationships,” he suggests. “Security engineers, in my experience, can struggle with empathy and understanding the pain points of other teams. What if we built a system that understands how our organization works and can communicate in a way that aligns with our culture? You want a culture where people proactively come to security, not the other way around.”
