As the AI arms race drives rushed adoption, great customer service at enterprise scale still requires human nuance

New Tab News Team
September 18, 2025
AI

ServiceNow's Manager of Security Analytics & AI, David Rider, advises against rushing into AI without clear needs and well-scoped rollouts.

Credit: Outlever


AI is remaking customer service in the enterprise, with adoption rates and expectations soaring. While the benefits are clear (greater efficiency, lower costs, and improved customer satisfaction), industry leaders warn against adopting the technology without sufficient thought. The real test lies in balancing innovation with strategy, security, and stability: companies must make AI serve their unique needs rather than succumb to hype.

Against this backdrop of change, David Rider, Manager of Security Analytics & AI at ServiceNow, offers his perspective. While statistics show 86% of customer service professionals have tested or implemented AI, and projections suggest AI could handle 95% of all customer interactions, Rider challenges the idea of an all-out AI arms race. "The sentiment that you have to be using AI right now is slightly misguided," Rider states. "I don't think all companies need to be using AI immediately."

  • Thoughtful adoption needed: Rider makes it clear that blanket adoption of AI isn't a universal solution, despite compelling benefits like cutting resolution times. A rushed approach can create major headaches: not all implementations yield the expected results, and companies may wrestle with integration and data quality while underestimating the resources required.

  • The human element: To show when AI might be a step back, Rider points to Chewy, a company known for its deeply personal human connections. "I think Chewy is a prime example of a company that doesn't really need to do that," Rider explains. "Pretty much anything Chewy could do now that involved AI or technology is going to be looked on as a step down from what they're doing already." For businesses whose brand means high-touch, empathetic service, AI might not be a savior but a setback.

The dangers of poorly implemented AI are real, Rider warns. "The worst thing you can have is AI that doesn't work," he says. "I'd rather have a human who's annoying and hard to work with than an AI because with the human, I can just say, 'Hey look, can you pass me to your supervisor?' But the AI is like, 'Well I don't even have a supervisor. You're stuck with me.'" His point mirrors common customer frustrations when AI can't grasp complex requests or offer a simple way to reach a human.

  • Smart plays for AI: Rider champions using AI where it truly adds value—in complex, nuanced situations, not simple, rule-based tasks, a distinction he believes some executives overlook. He highlights case summarization as a game changer because "every minute saved in customer service is crucial." This view is backed by research showing AI’s power to deliver hyper-personalization and boost efficiency.

  • Fairness in focus: Rider also questions whether companies will use AI equitably, or mainly for profit. He worries businesses might streamline sign-ups but make cancellations difficult, saying, "Imagine if, when you're signing up for a gym membership, it's now super easy. But then they keep the old antiquated systems for when you want to cancel." Recent consumer complaints and regulatory moves, like the FTC's October 2024 Negative Option Rule update targeting such "dark patterns," show the concern is well-founded.

In the end, Rider advises a deliberate path. "I think companies that are embedded in technology should start looking into the right situations to use AI," he suggests. "But I don't think there needs to be as big of a rush as people probably think. You can't just apply AI everywhere in your business if you don't first have a solid use case."

Related content

An Insider's Guide to Rewiring Orgs as Agents Move From Tools to Core Operators

Omer Grossman, former Chief Trust Officer and Head of the CYBR Unit at CyberArk, explains why nearly every enterprise claims to use AI but almost none have transformed the way their organizations actually operate.

Shadow AI and Departmental Silos Force Enterprises to Rethink Resilience

Nethusha Ravisuthan, Sales Support and Operations Manager at Microsoft, argues that Shadow AI, departmental silos, and ungoverned AI agents are compounding enterprise risk, and that operational trust and holistic system resilience must become foundational to AI deployment.

How Higher Education Puts Boundaries Around AI Agents With Sanctioned Access Models

Vijay Samtani, CISO at Cambridge University, discusses how blocking AI agents is a losing battle for security leaders. Their best course of action is to build clear rules and guidelines for AI access to control vulnerable surfaces.

