UK's ICO cracks down on AI and Biometrics with fresh oversight playbook

New Tab News Team
September 18, 2025
Industry News

The UK's ICO introduces a new strategy to regulate AI and biometric technologies, aiming to enhance data protection and public trust.


The UK's Information Commissioner's Office (ICO) has rolled out a fresh strategy to tighten the reins on artificial intelligence and biometric technologies. The move, announced June 5th, is designed to let innovation continue, but with stronger protections for personal data and public confidence.

  • Raising the guardrails: The ICO is clear that public trust hinges on responsible tech use, not the tech itself. The new plan includes a statutory code of practice for AI development and deployment, setting more formal rules around how personal data is used, especially for training generative AI models.

  • Spotlight on surveillance: Police use of facial recognition technology is a key focus. The ICO has promised audits and guidance to ensure the technology is deployed lawfully and proportionately, responding to its own research showing that more than half of the public worries about privacy infringements. Automated decision-making in areas like recruitment and public services, including systems at the Department for Work and Pensions, will also face increased scrutiny.

Beyond current applications, the ICO is gearing up for emerging risks like increasingly autonomous "agentic AI" and emotion inference technologies. Information Commissioner John Edwards emphasized that understanding AI's impact and addressing potential harms, like misidentification or biased job application outcomes, is paramount. The ICO's strategy signals a more hands-on approach to AI and biometrics, aiming to foster responsible innovation by embedding trust and accountability into the UK's tech ecosystem.

  • Elsewhere in regulation: The UK's move is part of a broader global push, with the EU's comprehensive AI Act setting a risk-based precedent and the US debating federal biometric privacy laws amidst its own tech ambitions. Industry bodies like the Biometrics Institute are also updating guidelines, all pointing to a future where tech innovation and robust ethical oversight must go hand-in-hand.


