
Report says majority of employees embrace AI unsupervised, leaving companies vulnerable

New Tab News Team
September 18, 2025
Industry News

A new EisnerAmper survey reveals 80% of employees use AI with little to no company oversight, posing security risks.


A new survey from EisnerAmper reveals a massive gap between widespread, unsupervised AI use by staff and the near-total lack of formal oversight from employers, creating a ticking time bomb for security and workplace dynamics.

  • Don't ask, don't tell: The report found that an astounding 80% of staff use AI with little to no company supervision. While more than a third of employees are frequent users, only 36% say their company has a formal AI policy, and a mere 22% report their usage is actively monitored. That isn't stopping them: more than a quarter admit they would use the tech even if it were banned.

  • Bring your own breach: This has led to a surge in unsanctioned AI use, with a majority of workers (60%) relying on free, public tools for their jobs. The practice creates a massive security risk, as employees have been known to paste sensitive company data into external platforms.

Despite the risks, employees are bullish on AI, with more than three-quarters saying it makes them more productive. While most claim they use the time saved to get more work done, others admit to taking a walk or a longer lunch. And having developed these new skills on the job, nearly three-quarters now expect to be compensated for them.

Companies are no longer in control of AI adoption. The workforce is integrating it from the ground up, forcing employers into a reactive position where they must now grapple with security risks, policy vacuums, and new compensation demands.

  • Also on our radar: The EisnerAmper report also revealed a sharp generational divide in AI sentiment, with younger workers being twice as happy using the tech as their older colleagues. Elsewhere, employees seem to welcome AI in the onboarding process but are deeply divided on its use in performance reviews.

Related content

An Insider's Guide to Rewiring Orgs as Agents Move From Tools to Core Operators

Omer Grossman, former Chief Trust Officer and Head of the CYBR Unit at CyberArk, explains why nearly every enterprise claims to use AI but almost none have transformed the way their organizations actually operate.

Shadow AI and Departmental Silos Force Enterprises to Rethink Resilience

Nethusha Ravisuthan, Sales Support and Operations Manager at Microsoft, argues that Shadow AI, departmental silos, and ungoverned AI agents are compounding enterprise risk, and that operational trust and holistic system resilience must become foundational to AI deployment.

How Higher Education Puts Boundaries Around AI Agents With Sanctioned Access Models

Vijay Samtani, CISO at Cambridge University, argues that blocking AI agents outright is a losing battle for security leaders. Their best course of action is to set clear rules and guidelines for AI access that keep vulnerable surfaces under control.
