Operational Leaders Turn AI Anxiety Into Adoption By Designing For Safe Experimentation

New Tab News Team
April 28, 2026
AI

Meyyammai Valliyappan, Technical Project Manager at VIZIO, breaks down how enterprise AI adoption takes hold when managers turn uncertainty into structured, low-risk use in real work.


AI rollout friction shows up in the day-to-day moments where employees decide what the technology means for their work. As new tools reshape the workforce and redefine the future of work, adoption can trigger anxiety among teams unsure how AI will affect their roles. Treating psychological safety as a practical part of the rollout plan, rather than an HR afterthought, often determines whether teams actually use the tools or quietly route around them for fear of losing their positions.

Meyyammai Valliyappan navigates that tension every day. As a Technical Project Manager currently driving enterprise transformation at consumer electronics company VIZIO, she runs large-scale, cross-functional programs across global teams. With more than 12 years of experience managing enterprise deployments, she turns executive strategies into daily routines. In that work, she treats emotional readiness as a hard operational requirement.

“When a team resists change, it’s rarely about the technology. It’s about the uncertainty and the perceived risk. What works is making the change feel smaller and safer, tied to real work people already do," she says.

Caught in the Middle

Executive mandates often squeeze the middle tier. Caught between the C-suite's delivery targets and their teams' uncertainty, middle managers have to figure out how to integrate the tools while still hitting their deadlines. It is in this space that the impact of AI technologies will be felt most.

"Managers are the important sandwich between higher management and employees," Valliyappan says. "AI adoption becomes real at the team level, not at the organization level. HR can handle training and integration, and managers translate AI strategy into actual behavior. They decide where AI gets used, how safe it feels to experiment, and whether it becomes part of the workflow or it's just another initiative."

Many organizations jump straight into tool training before establishing any baseline of safety. Big, abstract mandates often leave employees defensive. Valliyappan has found it more effective to be explicit about what will change and what will not before introducing the software mechanics. The approach helps employees see the technology as something that assists them rather than replaces them. For Valliyappan, the sequence of events around adoption is critical.

"First, I acknowledge the concern openly: AI will change how we work," she says. "Then I clarify what it doesn't change: human judgment, context, and decision-making still matter. Once we have clarity, people become curious instead of defensive." That ordering, she argues, is what makes the rest of the program work. "If people feel secure, they will learn faster than any structured program. What's worked for me is making the change feel smaller and safer. I start with specific low-risk use cases tied to their daily work, not big transformation narratives."

Setting Up Guardrails, Not Gates

Turning that emotional security into daily behavior frequently requires HR and managers to work together. When companies focus solely on compliance, a disconnect can arise in the workplace, complicating the state of AI in HR. Managers are left without concrete examples of how to use the technology safely and may worry that teams will use tools in ways that create policy risks or expose the vulnerabilities of unsupervised AI use. To establish effective governance, organizations can equip managers with the functional support they need.

"For managers to do this well, they need clarity from HR," Valliyappan says. "Not just policies, but practical guardrails and examples of what good looks like. So this is where a manager plays an important role in transforming AI strategy into a well-equipped workflow."

Setting up real policies often involves piloting new tech to determine its impact on the business. But business doesn't stop for tech pilots. Teams still have to ship products. Valliyappan's answer is to build experimentation into existing work by setting aside a small share of capacity for structured trials. That lets teams pursue safe, compliant adoption without stepping away from their commitments, aligning with how many modern organizations approach enterprise AI adoption.

"I usually allocate a small, time-boxed portion of sprint capacity, around 10 to 15%, and focus on areas like documentation, test cases, or analysis where the risk is low but the impact is visible," she says.

The connection between experimentation and outcome, Valliyappan adds, is what shifts perception. "The key is to connect AI usage to actual outcomes like time saved or improvements in quality," she says. "When teams see that, AI stops feeling like overhead and starts feeling like leverage, and the outcomes follow."

What AI-Readiness Actually Looks Like

When the framework takes hold, the behavioral changes on the floor are easy to spot. For Valliyappan, building an AI-ready culture is less about the number of tools deployed and more about how comfortably people use them. "An AI-ready team isn't the one using the most tools. It's the one that uses AI naturally and confidently," Valliyappan says. "In their workflow, you will see people experimenting without fear, sharing what works, and applying AI in ways that genuinely improve outcomes. There's also a healthy balance. They use AI, but they don't blindly trust it. Critical thinking is still strong." The marker she watches for, she says, is motivation.

These patterns serve as a reminder that the toughest parts of tech adoption are usually human. The determining factor is less the software itself than the environment managers create for learning and experimentation. When asked who ultimately owns the process, Valliyappan again points to the operational tier, where managers make ground-level decisions about where adoption happens. Executives can set targets for efficiency, but managers are the ones with their hands on day-to-day work.

When that middle layer is equipped to guide their teams, clear away the jargon, and foster genuine curiosity, the mechanics of adoption tend to fall into place. "AI adoption is not a technological problem, it's a leadership and trust problem," Valliyappan says. "The teams that succeed are the ones where people feel safe to learn, adapt, and evolve. Then AI adoption becomes a seamless workflow."

Related content

The Promise of AI Comes from Governing Systems that Don't Sit Still

Syeda Iram Fatima Jafry, a digital governance and AI expert, discusses the shifting target of AI governance and recommends that leaders govern outputs as much as inputs, with a focus on ongoing change.

To Justify Cybersecurity Spend Before A Crisis, Leaders Learn The Language Of Invisible ROI

Greg McCord, CISO at Lightcast, explains how cybersecurity leaders should learn the language of ROI and describes how AI and a positive mindset can help translate value to the board.

ASU’s CISO Pushes AI Data Governance Upstream To Procurement

Lester Godsey, CISO at Arizona State University, explains why AI vendor contracts have become the frontline data privacy battle in higher education, and how ASU is winning it.
