
Report says majority of employees embrace AI unsupervised, leaving companies vulnerable

New Tab News Team
September 18, 2025
Industry News

A new EisnerAmper survey reveals 80% of employees use AI with little to no company oversight, posing security risks.


A new survey from EisnerAmper reveals a massive gap between widespread, unsupervised AI use by staff and the near-total lack of formal oversight from employers, creating a ticking time bomb for security and workplace dynamics.

  • Don't ask, don't tell: The report found that an astounding 80% of staff use AI with little to no company supervision. While more than a third of employees are frequent users, only 36% say their company has a formal AI policy, and a mere 22% report their usage is actively monitored. That isn't stopping them: more than a quarter admit they would use the tech even if it were banned.

  • Bring your own breach: This has led to a surge in unsanctioned AI use, with a majority of workers (60%) relying on free, public tools for their jobs. The practice creates a massive security risk, as employees have been known to paste sensitive company data into external platforms.

Despite the risks, employees are bullish on AI, with more than three-quarters saying it makes them more productive. While most claim they use the time saved to get more work done, others admit to taking a walk or a longer lunch. Now, after developing these new skills on the job, nearly three-quarters of them want to get paid for it.

Companies are no longer in control of AI adoption. The workforce is integrating it from the ground up, forcing employers into a reactive position where they must now grapple with security risks, policy vacuums, and new compensation demands.

  • Also on our radar: The EisnerAmper report also revealed a sharp generational divide in AI sentiment, with younger workers twice as likely as their older colleagues to say they are happy using the tech. Elsewhere, employees seem to welcome AI in the onboarding process but are deeply divided on its use in performance reviews.

Related content

Security Leaders Build Adaptive Governance Frameworks to Contain Shadow AI Risk

Mahesh Varavooru, Founder of Secure AI, warns that Shadow AI creates a hidden two-way risk loop and calls for runtime guardrails and sanctioned sandboxes to secure enterprise innovation.

Clear Accountability Structures Reduce Risk, Anchor AI Deployment In Real Decision Workflows

Artur Walisko, Founder and Architect of LLM Studio, argues that the AI deployment gap is an architectural failure, not an adoption problem, and that governance must be built into AI systems as a structural layer before models reach real decisions.

Cyber Resilience Replaces Breach Prevention As The Defining Measure For Enterprise Security

Theresa Lanowitz, cybersecurity evangelist and former Gartner analyst, explains why resilience and supply chain accountability are the priorities security leaders must act on in 2026.

