The ICO isn't waiting for Parliament to pass an AI Act. Through a steady stream of guidance, audits, and enforcement actions, the Information Commissioner's Office is already shaping what AI governance looks like for UK companies. The question is whether your organisation is keeping pace — or whether you're building AI capabilities on a compliance foundation that could crack at any moment.
By Toni Dos Santos, Co-Founder, Spicy Advisory
The UK's Regulatory Landscape: A Different Path from Brussels
Let's start with what makes the UK position distinctive. While the EU passed the AI Act — a comprehensive, prescriptive regulation that classifies AI systems by risk level and imposes strict requirements — the UK government has deliberately chosen a different route.
The UK's approach, set out in the March 2023 white paper A Pro-Innovation Approach to AI Regulation and reinforced in subsequent policy updates, establishes five cross-cutting principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. But rather than creating a single horizontal AI regulator, the UK delegates enforcement to existing sector-specific regulators: the FCA for financial services, the CQC for health and social care, Ofcom for communications, and, critically, the ICO for anything involving personal data.
This matters because virtually every enterprise AI deployment touches personal data. Whether you're using AI for customer service, HR decisions, marketing personalisation, or internal analytics, the ICO is almost certainly your primary AI regulator. According to DSIT's 2025 AI Activity in UK Business survey, 68% of large businesses have adopted at least one AI technology — and the vast majority of those deployments process personal data in some form.
The practical implication: you don't need to wait for a UK AI Act to know what's expected. The ICO has been publishing AI-specific guidance since 2020, and enforcement is already happening.
What the ICO Actually Expects
The ICO's AI guidance isn't vague. It maps directly to existing data protection law — the UK GDPR and the Data Protection Act 2018 — applied to AI contexts. Here are the four pillars of what the ICO expects from organisations deploying AI.
1. Lawful Basis for AI Processing
Every AI system that processes personal data needs a lawful basis under Article 6 of the UK GDPR. This sounds straightforward, but AI complicates it. Training a model on customer data requires a lawful basis. Using that model to make inferences about individuals requires a lawful basis. Storing the outputs requires a lawful basis. Each stage of the AI pipeline may need separate legal justification.
The ICO has been particularly clear that legitimate interests — the most commonly cited basis for AI processing — requires a genuine balancing test, not a tick-box exercise. In 2024-25, the ICO issued enforcement notices to three organisations whose legitimate interest assessments for AI processing were deemed insufficient. The message: if your LIA for AI is a copy-paste template, you're exposed.
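To make the stage-by-stage point concrete, here is a minimal sketch, in Python, of a record of processing that documents a separate lawful basis for each stage of an AI pipeline. The stage names, field layout, and LIA reference format are illustrative assumptions, not an ICO-mandated structure.

```python
from dataclasses import dataclass
from enum import Enum

class LawfulBasis(Enum):
    """The six Article 6 UK GDPR lawful bases."""
    CONSENT = "consent"
    CONTRACT = "contract"
    LEGAL_OBLIGATION = "legal obligation"
    VITAL_INTERESTS = "vital interests"
    PUBLIC_TASK = "public task"
    LEGITIMATE_INTERESTS = "legitimate interests"

@dataclass
class PipelineStageRecord:
    """One row in a record of processing: a single stage of the AI pipeline."""
    stage: str                        # e.g. "training", "inference", "output storage"
    data_categories: list[str]        # personal data categories processed at this stage
    lawful_basis: LawfulBasis
    lia_reference: str | None = None  # link to the balancing test where the basis is legitimate interests

# Each stage of the same AI system is documented separately.
register = [
    PipelineStageRecord("training", ["customer purchase history"], LawfulBasis.LEGITIMATE_INTERESTS, "LIA-2025-014"),
    PipelineStageRecord("inference", ["customer profile"], LawfulBasis.LEGITIMATE_INTERESTS, "LIA-2025-014"),
    PipelineStageRecord("output storage", ["churn risk score"], LawfulBasis.LEGITIMATE_INTERESTS, "LIA-2025-014"),
]

# A simple completeness gate: any legitimate-interests stage must cite a documented LIA.
for record in register:
    if record.lawful_basis is LawfulBasis.LEGITIMATE_INTERESTS and not record.lia_reference:
        raise ValueError(f"Stage '{record.stage}' cites legitimate interests without a documented LIA")
```

The point of the structure is the gate at the end: if a stage leans on legitimate interests, the record refuses to pass without a reference to a genuine balancing test.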
2. Transparency and Explainability
The ICO's guidance on AI transparency goes beyond simply telling people their data is being processed. It requires organisations to explain how AI systems make decisions that affect individuals — in terms those individuals can understand. This is especially critical for automated decision-making under Article 22 of the UK GDPR, which gives individuals the right not to be subject to solely automated decisions with significant effects.
The ICO's 2025 transparency audit programme found that 72% of organisations using AI for customer-facing decisions could not adequately explain how those decisions were reached. This isn't just a compliance gap — it's a trust gap that directly impacts customer relationships.
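One way to close that gap is to treat the explanation as a first-class record rather than an afterthought. The sketch below shows an illustrative plain-language explanation structure that could accompany each automated decision; the field names are assumptions, not an ICO-specified schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionExplanation:
    """A plain-language record attached to each automated decision (Article 22 context)."""
    decision: str               # what was decided, in terms the individual can understand
    main_factors: list[str]     # the inputs that most influenced the outcome
    what_would_change_it: str   # counterfactual guidance the individual can act on
    human_review_contact: str   # how to contest the decision and reach a human
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

explanation = DecisionExplanation(
    decision="Your application was referred for manual review",
    main_factors=["income-to-repayment ratio", "length of credit history"],
    what_would_change_it="A credit history longer than 12 months would typically avoid referral",
    human_review_contact="reviews@example.com",
)
```

If your systems cannot populate a record like this for a given decision, that is a strong signal the deployment would struggle to meet the ICO's transparency expectations.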
3. Data Protection Impact Assessments for AI
DPIAs are mandatory for processing that is likely to result in a high risk to individuals' rights and freedoms, and the ICO's position is clear that most AI deployments meet this threshold. The ICO expects DPIAs for AI to go beyond standard data protection assessments and address AI-specific risks: bias, accuracy, drift, and the cascading effects of automated decisions.
Yet according to the UK Government's AI Regulation survey, only 35% of UK organisations conducting AI projects complete a DPIA before deployment. That means nearly two-thirds are either unaware of the requirement or actively choosing to skip it — a risky bet given the ICO's increasing audit activity.
4. Fairness and Bias
The ICO has made fairness in AI a priority enforcement area. AI systems that produce discriminatory outcomes — even unintentionally — can breach data protection law. The ICO expects organisations to test for bias before deployment, monitor for bias during operation, and have processes for remediation when bias is detected.
This intersects with the Equality Act 2010 and creates a dual compliance obligation. An AI recruitment tool that systematically disadvantages candidates from certain demographic groups isn't just a data protection problem — it's a discrimination lawsuit waiting to happen.
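For a flavour of what pre-deployment bias testing can look like, here is a minimal sketch comparing selection rates across groups. The ratio threshold you act on is a policy choice (the "four-fifths" rule of thumb from US employment practice is one reference point, not a UK legal standard), and serious bias testing goes well beyond this single metric.

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group label, was selected) pairs. Returns the selection rate per group."""
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        totals[group][0] += int(selected)
        totals[group][1] += 1
    return {group: selected / total for group, (selected, total) in totals.items()}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group selection rate divided by the highest; values well below 1.0 warrant investigation."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates([("A", True), ("A", True), ("A", False),
                         ("B", True), ("B", False), ("B", False)])
print(rates)                          # {'A': 0.67, 'B': 0.33} (approximately)
print(disparate_impact_ratio(rates))  # 0.5: group B is selected at half the rate of group A
```

Running a check like this before go-live, and again on live outcomes at regular intervals, gives you the kind of documented evidence the ICO's fairness guidance expects organisations to be able to produce.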
UK vs EU: What the Differences Mean in Practice
Understanding the contrast between the UK and EU approaches is essential for any organisation operating across both jurisdictions — and for UK companies watching the EU AI Act to gauge where domestic regulation might head.
| Dimension | UK Approach | EU AI Act |
|---|---|---|
| Regulatory model | Principles-based, sector-specific regulators | Prescriptive, horizontal regulation with central oversight |
| Risk classification | No formal classification system (yet) | Four-tier risk classification (unacceptable, high, limited, minimal) |
| Enforcement | Existing regulators (ICO, FCA, etc.) | National authorities + EU AI Office |
| Compliance obligations | Mapped to existing law (UK GDPR, Equality Act) | New, AI-specific obligations including conformity assessments |
| Innovation posture | Explicitly pro-innovation, regulatory sandboxes | Safety-first, precautionary principle |
The UK approach gives companies more flexibility — but also less certainty. Without a formal risk classification system, organisations must make their own judgements about what level of governance each AI deployment requires. That's liberating for mature organisations and dangerous for those without governance capability.
The Spicy AI Governance Stack: A Four-Layer Framework
After working with dozens of UK organisations on AI governance, we've developed a practical framework that meets ICO expectations without creating the kind of bureaucratic overhead that kills AI innovation. We call it The Spicy AI Governance Stack — four layers that work together to create governance that enables rather than blocks.
Layer 1: Policy — The Rules of the Road
Every organisation deploying AI needs a clear AI usage policy. This is your foundational document — it sets boundaries, defines acceptable use, and gives employees clarity on what they can and can't do with AI tools.
Your AI usage policy should cover:
- Approved AI tools: Which tools are sanctioned for use, which are prohibited, and the process for requesting new tools
- Data classification rules: What categories of data can be input into AI systems (public, internal, confidential, personal data) and under what conditions (see the sketch after this list)
- Output governance: Requirements for human review before AI outputs are used in decisions, communications, or published materials
- Prohibited uses: Clear red lines — e.g., no AI for automated hiring decisions without human oversight, no personal data in public AI tools without authorisation
- Incident reporting: How to report AI errors, unexpected outputs, or potential bias
- Version control: Policy review frequency (we recommend quarterly given the pace of AI development)
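To show how the data classification rules can be enforced rather than merely stated, here is a minimal policy-as-code sketch. The tool names, classification labels, and allow-lists are hypothetical; in practice a real DLP engine would do the classification, as discussed under Layer 4.

```python
# Data classes each approved tool may receive; both the tools and the classes are illustrative.
APPROVED_TOOLS: dict[str, set[str]] = {
    "chatgpt-enterprise": {"public", "internal"},
    "internal-llm": {"public", "internal", "confidential"},
}

def check_submission(tool: str, data_class: str) -> None:
    """Raise if the AI usage policy does not allow this data class in this tool."""
    allowed = APPROVED_TOOLS.get(tool)
    if allowed is None:
        raise PermissionError(f"'{tool}' is not an approved AI tool; follow the new-tool request process")
    if data_class not in allowed:
        raise PermissionError(f"Policy blocks '{data_class}' data in '{tool}'")

check_submission("chatgpt-enterprise", "internal")    # allowed, returns silently
# check_submission("chatgpt-enterprise", "personal")  # would raise PermissionError
```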
The policy needs to be specific enough to be actionable but flexible enough to accommodate new tools and use cases. A 50-page policy document that nobody reads is worse than a well-crafted 5-page document that every employee understands.
Layer 2: Process — DPIA, Risk Assessment, and Ongoing Monitoring
Process is where governance becomes operational. This layer covers the workflows that ensure AI deployments are assessed, approved, monitored, and retired responsibly.
AI-Specific DPIA Checklist:
- Data inputs: What personal data does the AI system process? What is the lawful basis for each category?
- Training data: Was personal data used to train or fine-tune the model? If so, what consent or legal basis applies?
- Decision scope: Does the system make or inform decisions about individuals? Are those decisions solely automated?
- Bias testing: Has the system been tested for discriminatory outcomes across protected characteristics?
- Accuracy validation: What is the system's error rate? What are the consequences of errors for affected individuals?
- Transparency measures: Can you explain to affected individuals how the system works and how decisions are reached?
- Data minimisation: Is the system processing only the data necessary for its purpose?
- Retention and deletion: How long are AI inputs, outputs, and logs retained? Is automated deletion in place?
- Third-party risks: If using a third-party AI service, what data processing agreements are in place? Where is data processed geographically?
- Human oversight: What human review mechanisms exist? Who has authority to override AI decisions?
This checklist should be completed before any AI system goes live and reviewed annually — or whenever the system is significantly updated.
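For teams that want to operationalise the checklist, here is a minimal sketch of it as a structured record with a go-live gate. The field names mirror the checklist items above; the pass/fail logic is an illustrative assumption, not an ICO template.

```python
from dataclasses import dataclass, fields

@dataclass
class AIDpiaChecklist:
    """Each field mirrors one item in the AI-specific DPIA checklist above.
    None means 'not yet assessed'; the gate below treats anything but True as a blocker."""
    data_inputs_documented: bool | None = None
    training_data_basis_confirmed: bool | None = None
    decision_scope_assessed: bool | None = None
    bias_testing_completed: bool | None = None
    accuracy_validated: bool | None = None
    transparency_measures_in_place: bool | None = None
    data_minimisation_reviewed: bool | None = None
    retention_and_deletion_defined: bool | None = None
    third_party_agreements_checked: bool | None = None
    human_oversight_defined: bool | None = None

def deployment_blockers(checklist: AIDpiaChecklist) -> list[str]:
    """Return the checklist items that are unassessed or failed."""
    return [f.name for f in fields(checklist) if getattr(checklist, f.name) is not True]

dpia = AIDpiaChecklist(data_inputs_documented=True, bias_testing_completed=False)
blockers = deployment_blockers(dpia)
if blockers:
    print("Deployment blocked. Outstanding items:", ", ".join(blockers))
```

Encoding the checklist this way makes the annual review easy to schedule and hard to skip silently: the record cannot report a clean bill of health while gaps remain.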
Layer 3: People — Training, Roles, and Accountability
Governance frameworks fail when nobody owns them. This layer defines who is responsible for what and ensures they have the skills to fulfil those responsibilities.
Key roles to define:
- AI Governance Lead: Typically sits within legal, compliance, or the DPO's office. Owns the governance framework, coordinates DPIAs, and reports to the board
- Departmental AI Champions: One per major department. Acts as the first point of contact for AI governance questions within their team. Completes AI-specific training annually
- Data Protection Officer: Already a statutory role for many organisations. The DPO's remit explicitly includes AI governance under the ICO's guidance
- Board-level oversight: At least one board member or committee should have explicit responsibility for AI risk. The ICO's 2025 guidance specifically recommends board-level AI oversight
Training is not optional. DSIT's 2025 survey found that only 22% of UK businesses have provided AI-specific governance training to staff involved in AI deployment. That's a governance gap that no policy document can fill. People need to understand not just what the rules are, but why they exist and how to apply them to novel situations.
Layer 4: Platform — Tool Selection and Data Controls
The final layer addresses the technology itself. Governance isn't just about human processes — it's about ensuring your AI tools and infrastructure support compliant use.
Platform governance covers:
- Vendor assessment: Evaluate AI vendors against data protection, security, and transparency criteria before procurement. Where is data processed? What are the vendor's data retention policies? Can you audit their systems?
- Data loss prevention: Technical controls that prevent sensitive data from being input into unapproved AI tools. This includes browser extensions, API gateways, and network-level controls
- Audit logging: All AI interactions involving personal data should be logged for compliance and investigation purposes (a minimal logging sketch follows this list)
- Access controls: Role-based access to AI tools, ensuring that only authorised personnel can use AI for specific purposes
- Shadow AI detection: Monitoring for unauthorised AI tool usage across the organisation. Our guide to managing shadow AI covers this in depth
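To make the audit-logging control concrete, here is a minimal sketch of a wrapper that records every AI interaction involving personal data. The `call_model` placeholder and the log fields are assumptions; a production system would write to append-only, access-controlled storage rather than a local logger.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def call_model(prompt: str) -> str:
    """Placeholder for whichever AI service your organisation actually uses."""
    return "model response"

def audited_ai_call(user_id: str, purpose: str, involves_personal_data: bool, prompt: str) -> str:
    """Wrap an AI call so interactions involving personal data leave an audit trail."""
    response = call_model(prompt)
    if involves_personal_data:
        audit_log.info(json.dumps({
            "event_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user_id": user_id,
            "purpose": purpose,
            # Log sizes, not content, to avoid copying personal data into the logs themselves.
            "prompt_chars": len(prompt),
            "response_chars": len(response),
        }))
    return response

audited_ai_call("u-1042", "customer service draft", involves_personal_data=True,
                prompt="Summarise this customer complaint...")
```

Note the design choice flagged in the comment: the log records that an interaction happened, by whom and for what purpose, without duplicating the personal data it concerns.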
Implementation: Making Governance Work Without Killing Innovation
The fear I hear most often from CTOs and innovation leaders is that governance will slow everything down. It's a legitimate concern — poorly designed governance does exactly that. But well-designed governance actually accelerates AI adoption by removing uncertainty.
Here's the approach that works:
Tiered governance based on risk. Not every AI use case needs the same level of scrutiny. A marketing team using AI to draft social media posts needs lighter governance than an HR team using AI to screen CVs. Create three tiers — low, medium, and high risk — with proportionate requirements for each. Low-risk deployments might need only a brief risk assessment and manager approval. High-risk deployments get the full DPIA, legal review, and board sign-off.
Pre-approved use cases. For common, low-risk AI applications, create a catalogue of pre-approved use cases with built-in guardrails. If a team wants to use ChatGPT Enterprise for meeting summarisation using only internal data, that's a pre-approved use case. No DPIA needed, no legal review — just follow the standard operating procedure. This eliminates 60-70% of governance friction.
Fast-track DPIA process. For medium-risk deployments, create a streamlined DPIA template that can be completed in 2-3 hours, not 2-3 weeks. The ICO doesn't mandate a specific DPIA format — it mandates that you assess risk adequately. A focused, well-designed template achieves compliance faster than a bloated enterprise risk assessment form.
Regular governance reviews. AI governance isn't a one-time project. Schedule quarterly reviews of your governance framework, AI tool inventory, and risk assessments. AI tools evolve rapidly — a model update from your vendor could change the risk profile of a deployment overnight.
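As a sketch of how the tiering and the pre-approved catalogue can be wired together, here is an illustrative triage function. The tier thresholds, catalogue entries, and required steps are assumptions to calibrate against your own risk appetite, not a prescribed scheme.

```python
# Illustrative catalogue: pre-approved use case -> standard operating procedure reference.
PRE_APPROVED: dict[str, str] = {
    "meeting summarisation (internal data only)": "SOP-AI-001",
    "social media drafting (public data only)": "SOP-AI-002",
}

def triage(use_case: str, processes_personal_data: bool, automated_decisions: bool) -> dict:
    """Map a proposed AI use case to a governance tier and its required steps."""
    if use_case in PRE_APPROVED:
        return {"tier": "pre-approved", "steps": [f"follow {PRE_APPROVED[use_case]}"]}
    if automated_decisions:
        return {"tier": "high", "steps": ["full DPIA", "legal review", "board sign-off"]}
    if processes_personal_data:
        return {"tier": "medium", "steps": ["fast-track DPIA", "AI Governance Lead approval"]}
    return {"tier": "low", "steps": ["brief risk assessment", "manager approval"]}

print(triage("CV screening", processes_personal_data=True, automated_decisions=True))
# {'tier': 'high', 'steps': ['full DPIA', 'legal review', 'board sign-off']}
```

Teams get an answer in seconds instead of waiting on a committee, which is the whole point: governance that accelerates rather than blocks.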
The Enforcement Reality: What Happens If You Get It Wrong
The ICO has real teeth. Under the UK GDPR, the maximum fine is GBP 17.5 million or 4% of global annual turnover, whichever is higher. To make that concrete: a company with global turnover of GBP 1 billion faces a theoretical maximum of GBP 40 million, since 4% of turnover exceeds the fixed threshold. While the ICO has historically been more measured than some EU data protection authorities, enforcement activity is increasing.
In the AI context specifically, the ICO has:
- Issued enforcement notices for inadequate legitimate interest assessments for AI processing
- Conducted audits of AI-driven decision-making systems in financial services and recruitment
- Published guidance that explicitly warns organisations against treating AI governance as optional
- Established a dedicated AI and technology team to support investigations and audits
Beyond ICO enforcement, there's reputational risk. A 2025 Edelman Trust Barometer survey found that 64% of UK consumers would stop using a service if they learned their data was being processed by AI without adequate transparency. Governance failures don't just attract fines — they destroy customer trust.
What's Coming Next: The UK AI Regulation Timeline
The UK government has signalled that binding AI regulation is coming, even if the timeline remains fluid. Here's what to watch:
- 2025-2026: Continued reliance on existing regulators and non-statutory guidance. The ICO, FCA, and other regulators continue to publish sector-specific AI guidance. Regulatory sandboxes are expanded
- 2026-2027: Expected introduction of AI-specific legislative proposals, potentially including mandatory AI transparency requirements and an AI incident reporting framework
- Beyond 2027: Possible establishment of a central AI standards body to coordinate across sector regulators
The smart move is to build governance now that can absorb future regulation without a costly overhaul. The Spicy AI Governance Stack is designed precisely for this — the four-layer structure means you can tighten specific layers as new requirements emerge without rebuilding from scratch.
"The organisations that will thrive under future AI regulation are the ones building governance today — not because they have to, but because it makes their AI adoption faster, safer, and more trustworthy. Governance done right is a competitive advantage, not a compliance burden." — Toni Dos Santos, Co-Founder, Spicy Advisory
Need help building an AI governance framework that satisfies the ICO without slowing your teams down? Spicy Advisory's AI governance training programme equips compliance officers, DPOs, and leadership teams with the practical frameworks, templates, and skills to govern AI effectively. Book a discovery call.
Frequently Asked Questions
Does the UK have an AI Act?
No, the UK does not currently have a dedicated AI Act comparable to the EU's AI Act. Instead, the UK follows a pro-innovation, principles-based approach where existing sector-specific regulators (the ICO for data protection, the FCA for financial services, the CQC for health and social care) apply their existing powers to AI within their domains. The UK government has set out five cross-cutting principles (safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress) but has not yet enacted AI-specific legislation. However, binding AI regulation is expected to be introduced in 2026-2027. Organisations should build governance frameworks now to be ready.
What does the ICO require for AI governance?
The ICO requires organisations using AI that processes personal data to comply with the UK GDPR and Data Protection Act 2018. In practice, this means establishing a lawful basis for each stage of AI processing, conducting Data Protection Impact Assessments for high-risk AI deployments (which includes most AI systems affecting individuals), ensuring transparency and explainability of AI decisions, testing and monitoring for bias and fairness, maintaining appropriate human oversight of automated decisions, and implementing data minimisation and retention controls. The ICO has published detailed guidance on AI and data protection, and has a dedicated technology team conducting audits and investigations.
Do I need a DPIA for using AI tools?
In most cases, yes. The ICO's position is that AI processing involving personal data is likely to meet the threshold for mandatory DPIAs — processing that is likely to result in a high risk to individuals' rights and freedoms. This includes AI used for profiling, automated decision-making, large-scale processing of personal data, or systematic monitoring. Even AI tools used for internal purposes (such as HR analytics or employee performance monitoring) typically require a DPIA. Only AI use cases that involve no personal data at all — for example, using AI to analyse purely anonymised market data — may not require a DPIA. When in doubt, the ICO recommends completing a DPIA as a matter of good practice.
How is UK AI regulation different from the EU AI Act?
The key differences are structural and philosophical. The EU AI Act is a single, comprehensive regulation that classifies AI systems into four risk categories (unacceptable, high, limited, minimal) with prescriptive requirements for each — including mandatory conformity assessments, registration in an EU database, and detailed technical documentation. The UK approach is principles-based rather than prescriptive, relies on existing sector-specific regulators rather than creating a new AI authority, does not impose a formal risk classification system, and explicitly prioritises innovation alongside safety. For organisations operating in both jurisdictions, this means maintaining two compliance frameworks — the EU's detailed, rule-based requirements and the UK's more flexible but less predictable principles-based approach.