Nethusha Ravisuthan, Sales Support and Operations Manager at Microsoft, argues that Shadow AI, departmental silos, and ungoverned AI agents are compounding enterprise risk, and that operational trust and holistic system resilience must become foundational to AI deployment.

"Enterprises must think in terms of holistic system resilience, not just departmental ownership. What holds everything together is as important as the parts themselves."
Enterprises are adopting AI tools across every department, from HR applicant tracking to financial analytics to marketing automation. But most organizations still treat the security of those tools as someone else's problem. The result is a growing category of risk that cuts across every team and every workflow: employees using unapproved AI tools outside the corporate ecosystem, feeding confidential data into systems the organization does not control.
Nethusha Ravisuthan is a Sales Support and Operations Manager at Microsoft, where she drives enterprise product adoption across Southeast Asian markets. With a background in cybersecurity and digital forensics and prior operational leadership roles at HCLTech, Ravisuthan works at the intersection of AI enablement and data governance, pushing customers to adopt AI tools while helping them understand the security obligations that come with that adoption.
The core problem, Ravisuthan argues, is that most companies still run AI adoption in departmental silos. Security sits in one corner, operations in another, and individual teams make their own decisions about which tools to use. That fragmentation creates exactly the conditions where Shadow AI thrives.
In the shadows: Employees gravitate toward whichever AI tool they find most convenient for a given task, regardless of whether it falls within the approved stack. "Someone might prefer one tool for research and another for writing, and then they upload confidential data into a system the company doesn't control," Ravisuthan explains. "If that tool is compromised on the same day, the confidential information goes out to the world. That can cause a drastic impact on the organization."
Compounding damage: A breach through an unapproved tool does not just affect the employee who used it. It hits the company's reputation, its partnerships, and its bottom line. "Let's say a company partners with one vendor but an employee uploads documents into something else, and a data breach happens through that other tool," Ravisuthan says. "The reputation damage comes back to the company and the partner they are associated with. It doesn't limit itself to one part of the entity."
The risk compounds across departments because AI tools now touch data in HR, finance, marketing, and operations simultaneously. Applicant tracking systems ingest personal data. Financial models process sensitive numbers. Mobile devices collect biometric data, from fingerprints to facial recognition to iris scans. Each department generates its own exposure, and when any one of them is breached, the consequences are enterprise-wide.
Revenue risk: "If there is a miss with the data or a cyber attack happening inside the premises, that affects everything the company holds. It can affect revenue in a major way," Ravisuthan says. She points to IBM's Cost of a Data Breach research, which puts the global average breach cost at $4.4 million, a figure she ties directly to the kind of third-party tool exposure that Shadow AI introduces.
Find the source: Rather than blanket bans on external tools, Ravisuthan advocates for organizations to surface the needs driving Shadow AI in the first place. "Talk to your IT admin, convey feedback to your vendors, and make sure you can get something that meets your requirements," she says. "Whatever the customer brings, we try to build something out of it and make sure they have a secure environment."
The conversation shifts when AI agents start representing organizations directly. Chatbots, cold email agents, and automated workflows increasingly serve as the front line of customer interaction, making decisions and delivering responses before a human is ever involved. That changes the trust equation.
The AI brand rep: "AI agents are representing a part of your organization. You should be able to trust them and give them the knowledge they need to represent your brand, because they are the front runner before it gets to the human," Ravisuthan says. "Operational trust should be mandatory. It should be a part of deploying AI, and it should be monitored with the same rigor you would apply to a human employee."
The gap between AI adoption speed and organizational readiness is widening, particularly in markets where enterprises are still transitioning from traditional operating models. Ravisuthan sees this firsthand across Southeast Asia, where governments and private enterprises are digitizing rapidly but governance frameworks lag behind the tools they are deploying.
The enterprises that navigate this well will not be the ones with the best technology stack. They will be the ones that treat operational trust as infrastructure, not an afterthought. "People must understand that security is important, privacy matters, and they have to understand what holds everything together and what part of the bridge can collapse to make everything fall apart," Ravisuthan concludes.