Your AI training programme covers UK compliance. Or it covers EU compliance. But does it cover both? Since Brexit, the UK and EU have diverged sharply on AI regulation. The EU's AI Act imposes prescriptive, risk-based obligations. The UK's approach relies on principles and sector-specific regulators. For the estimated 45,000 UK companies that trade with or operate in the EU, this regulatory fork creates a dual-compliance challenge that most AI training programmes completely ignore.
By Toni Dos Santos, Co-Founder, Spicy Advisory
Two Paths, One Workforce
Let's map the divergence clearly. The UK and EU started from the same regulatory foundation — the GDPR — but have moved in fundamentally different directions on AI.
The EU AI Act: Prescriptive and Risk-Based
The EU AI Act, which entered into force in August 2024 with phased implementation through 2027, is the world's first comprehensive AI regulation. It classifies AI systems into four risk categories:
- Unacceptable risk: Banned outright — social scoring, real-time biometric identification in public spaces (with limited exceptions), manipulative AI targeting vulnerabilities
- High risk: Subject to strict obligations — conformity assessments, technical documentation, human oversight, accuracy and robustness testing. Includes AI used in recruitment, credit scoring, education, law enforcement, and critical infrastructure
- Limited risk: Transparency obligations — users must be informed they are interacting with AI (chatbots, deepfakes, emotion recognition)
- Minimal risk: No specific obligations — spam filters, AI-powered video games, most internal business tools
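The four tiers above can be sketched as a simple lookup helper for an internal AI inventory. This is an illustrative sketch only: the category assignments mirror the examples in the list above, the use-case names are hypothetical, and a real classification under the Act requires legal analysis of the system in context.

```python
from enum import Enum

class EUAIActRisk(Enum):
    """The EU AI Act's four risk tiers, as summarised above."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # conformity assessment, documentation, oversight
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Illustrative mapping of use cases to tiers, taken from the examples
# in the text; a real assessment needs legal review, not a dictionary.
EXAMPLE_CLASSIFICATIONS = {
    "social_scoring": EUAIActRisk.UNACCEPTABLE,
    "recruitment_screening": EUAIActRisk.HIGH,
    "credit_scoring": EUAIActRisk.HIGH,
    "customer_chatbot": EUAIActRisk.LIMITED,
    "spam_filter": EUAIActRisk.MINIMAL,
}

def classify(use_case: str) -> EUAIActRisk:
    """Look up a use case; unknown systems default to HIGH so they
    get reviewed rather than silently waved through."""
    return EXAMPLE_CLASSIFICATIONS.get(use_case, EUAIActRisk.HIGH)
```

The default-to-HIGH choice reflects a conservative governance posture: an unclassified system should trigger review, not pass quietly.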
The penalties are severe: up to €35 million or 7% of global annual turnover for the most serious violations.
The UK Approach: Principles-Based and Sector-Specific
The UK government's Pro-Innovation Approach to AI Regulation, first published in March 2023 and updated through 2025, takes a deliberately different path. Instead of a single horizontal regulation, the UK establishes five cross-cutting principles — safety, transparency, fairness, accountability, and contestability — and delegates enforcement to existing sector-specific regulators.
This means:
- The ICO regulates AI involving personal data (under UK GDPR)
- The FCA regulates AI in financial services
- The CQC regulates AI in healthcare
- Ofcom regulates AI in communications and broadcasting
- The CMA monitors AI's impact on competition
There is no formal risk classification system, no mandatory conformity assessment, and no centralised AI registry. The UK approach gives companies more flexibility but also less certainty about what compliance looks like.
Why This Divergence Matters for Training
Here's the problem: 73% of UK companies with EU operations have no dual-compliance AI training programme in place, according to a 2025 survey by the CBI. Most either train to UK standards only (assuming EU compliance will sort itself out) or train to EU standards only (over-engineering for domestic use). Neither approach is adequate.
The Compliance Training Gap: What Most UK Companies Are Missing
The most dangerous gap in UK AI training is the EU AI Act's extraterritorial scope. Article 2 of the EU AI Act applies to:
- Providers of AI systems that are placed on the market or put into service in the EU — regardless of where the provider is established
- Deployers of AI systems who are located in the EU
- Providers and deployers located outside the EU where the output produced by the AI system is used in the EU
This means a UK-headquartered company that sells an AI-powered product to EU customers, or whose AI system produces outputs consumed by EU-based users, must comply with the EU AI Act — even though the company is no longer in the EU.
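The Article 2 test described above can be sketched as a screening check for an internal AI inventory. The three boolean criteria mirror the bullet list; the record fields are hypothetical names chosen for this sketch, and a positive result should be read as "escalate to legal", not as a definitive scope determination.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical inventory record for an Article 2 screening check."""
    name: str
    placed_on_eu_market: bool  # provider criterion
    deployer_in_eu: bool       # deployer criterion
    output_used_in_eu: bool    # extraterritorial output criterion

def in_eu_ai_act_scope(system: AISystem) -> bool:
    """Flag systems matching any of the three Article 2 criteria above.
    True means 'escalate for legal review', not 'definitely in scope'."""
    return (
        system.placed_on_eu_market
        or system.deployer_in_eu
        or system.output_used_in_eu
    )

# A UK fintech whose credit-scoring outputs reach EU-based customers
# is flagged even though it has no EU establishment:
scoring = AISystem("credit-scoring", False, False, True)
```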
The practical implications for training are significant:
| Dimension | UK Approach | EU AI Act |
|---|---|---|
| Regulatory model | Principles-based, sector-specific | Prescriptive, horizontal regulation |
| Risk classification | No formal system | Four-tier: unacceptable, high, limited, minimal |
| Conformity assessment | Not required | Mandatory for high-risk systems |
| AI registry | None | EU database for high-risk systems |
| Enforcement body | Existing regulators (ICO, FCA, etc.) | National authorities + EU AI Office |
| Maximum penalties | Varies by regulator (e.g., ICO: £17.5M/4%) | €35M or 7% of global turnover |
The Spicy Dual-Compliance AI Training Matrix
We've developed a role-based training matrix that maps exactly who needs to know what about each regulatory regime. The principle is simple: not everyone needs to be an expert in both frameworks, but specific roles need specific knowledge of specific obligations.
Tier 1: All Staff — AI Regulatory Awareness (2 Hours)
Every employee who uses or is affected by AI needs a baseline understanding of both frameworks. This covers:
- Why the UK and EU have different approaches and what that means in practice
- Which AI systems in the organisation fall under UK regulation, EU regulation, or both
- The employee's personal obligations — what they can and can't do with AI tools
- How to identify potential compliance issues and who to escalate to
Tier 2: Legal, Compliance, and DPO Teams — Deep Regulatory Dive (1 Day)
These teams need comprehensive knowledge of both frameworks:
- EU AI Act risk classification methodology — how to assess which category a system falls into
- UK regulatory landscape — which regulator applies to which AI use case
- Interaction between AI regulation and data protection (UK GDPR vs EU GDPR)
- Cross-border data flows and adequacy decisions post-Brexit
- Enforcement trends — ICO vs CNIL approaches, early EU AI Act enforcement signals
Tier 3: Product, Engineering, and Data Teams — Technical Compliance (1 Day)
Teams building or deploying AI systems need practical compliance skills:
- EU AI Act conformity assessment process — what documentation is required and how to prepare it
- Technical documentation requirements — model cards, data sheets, risk assessments
- Bias testing and accuracy validation under both UK and EU standards
- Human oversight implementation — when and how to build in human review
- Incident reporting — what constitutes a reportable AI incident under each framework
Tier 4: Procurement Teams — Vendor Compliance (Half Day)
Procurement teams are the gatekeepers for AI tools entering the organisation:
- How to evaluate AI vendor compliance with both UK and EU requirements
- Key contractual clauses for AI procurement — liability, data processing, transparency, audit rights
- Data residency requirements and cross-border transfer mechanisms
- How to assess whether a vendor's AI system qualifies as high-risk under the EU AI Act
Tier 5: C-Suite and Board — Strategic Regulatory Briefing (Half Day)
Senior leaders need to understand the strategic implications:
- Liability framework — who is personally accountable for AI compliance failures under each regime
- Strategic positioning — how regulatory compliance can become a competitive advantage with EU clients
- Investment implications — what dual compliance means for AI budgets and timelines
- Board governance — how to structure AI oversight for dual-market operations
Implementation: The 90-Day Sprint to Dual Compliance
Moving from single-market to dual-market AI compliance readiness doesn't require a year-long programme. Here's our recommended 90-day sprint:
- Weeks 1-3: Audit. Map every AI system in the organisation against both UK and EU requirements. Identify which systems have extraterritorial EU AI Act obligations. Assess current staff competency against the dual-compliance matrix
- Weeks 4-6: Design. Customise the five-tier training programme to your organisation's specific AI landscape, regulatory exposure, and workforce structure
- Weeks 7-10: Deliver. Roll out Tier 1 (all staff) and Tier 5 (C-suite) simultaneously. These create the foundation and executive sponsorship. Follow with Tiers 2-4 for targeted populations
- Weeks 11-12: Embed. Integrate dual-compliance checks into existing AI governance processes. Set up ongoing monitoring and refresh cycles (quarterly for legal/compliance, annually for all-staff)
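The Weeks 1-3 audit output can be sketched as one record per AI system, mapped against both regimes at once. Field names and the helper method are illustrative assumptions for this sketch, not a prescribed schema; the point is that a single inventory row should capture UK regulator exposure, EU risk tier, and Article 2 exposure together, so training populations fall out of the data.

```python
from dataclasses import dataclass, field

@dataclass
class DualComplianceRecord:
    """One row of the Weeks 1-3 audit: an AI system mapped against
    both regimes. Field names are illustrative, not prescribed."""
    system: str
    uk_regulators: list[str] = field(default_factory=list)  # e.g. ["ICO", "FCA"]
    eu_risk_tier: str = "unclassified"  # unacceptable / high / limited / minimal
    extraterritorial: bool = False      # flagged under the Article 2 screen

    def needs_technical_training(self) -> bool:
        """High-risk systems with EU exposure drive the Tier 3
        technical-compliance curriculum for their build teams."""
        return self.extraterritorial and self.eu_risk_tier == "high"

# Sketch of a two-system inventory and the training population it yields:
inventory = [
    DualComplianceRecord("cv-screening", ["ICO"], "high", True),
    DualComplianceRecord("internal-search", [], "minimal", False),
]
priority = [r.system for r in inventory if r.needs_technical_training()]
```

Deriving training cohorts from the audit data, rather than assigning them by department, keeps Tiers 2-4 targeted at the people who actually touch in-scope systems.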
Spicy Advisory's cross-border positioning — headquartered in Paris, serving clients across the UK and EU — gives us unique insight into how both regulatory regimes operate in practice. We've seen how French companies navigate the EU AI Act and how UK companies adapt their governance frameworks. This dual-market experience informs every aspect of our training programmes.
Operating across the UK and EU? Need your teams trained on both regulatory frameworks? Spicy Advisory is uniquely positioned to deliver dual-compliance AI training — we're based in Paris with deep expertise in both the EU AI Act and UK regulatory landscape. Our Dual-Compliance AI Training Matrix is tailored to your organisation's specific cross-border exposure. Book a discovery call.
Frequently Asked Questions
Does the EU AI Act apply to UK companies?
Yes, in many cases. The EU AI Act has extraterritorial scope under Article 2. It applies to UK companies that place AI systems on the EU market, deploy AI systems within the EU, or produce AI outputs that are used within the EU. For example, a UK fintech whose credit-scoring AI is used by EU-based customers must comply with the EU AI Act's requirements for high-risk AI systems — including conformity assessments, technical documentation, and human oversight — even though the company is headquartered in the UK. Companies with no EU customers, operations, or outputs used in the EU are not affected.
What is the UK equivalent of the EU AI Act?
The UK does not have a direct equivalent of the EU AI Act. Instead of a single comprehensive AI regulation, the UK follows a principles-based approach where existing sector-specific regulators apply five cross-cutting principles (safety, transparency, fairness, accountability, contestability) to AI within their domains. The ICO handles AI and personal data, the FCA covers financial services AI, and so on. The UK government has signalled that more formal AI legislation may come in 2026-2027, but for now, organisations must navigate a patchwork of existing regulations applied to AI contexts. This gives companies more flexibility but requires them to understand which regulator applies to each AI use case.
How do ICO and CNIL enforcement approaches differ?
The ICO (UK) and CNIL (France, and a leading EU enforcer) have notably different enforcement styles. The ICO tends to be more pragmatic and engagement-focused — it often issues guidance and recommendations before formal enforcement, and its fines have historically been lower than those of some EU counterparts. The CNIL is more aggressive and precedent-setting — it has issued some of the largest GDPR fines in Europe and has been proactive on AI-specific enforcement. For AI compliance, the CNIL has published detailed recommendations on AI and personal data that go beyond ICO guidance in specificity. In practice, organisations should design their compliance frameworks to meet the stricter standard (typically the EU/CNIL approach) and then adjust for UK-specific requirements.
What training do procurement teams need for AI compliance?
Procurement teams are critical but often overlooked in AI compliance training. They need practical skills in four areas: evaluating vendor AI compliance documentation (does the vendor have conformity assessments, technical documentation, and risk assessments for high-risk systems?); understanding key contractual clauses for AI procurement (liability allocation, data processing agreements, transparency commitments, audit rights, model training opt-outs); assessing data residency and cross-border transfer mechanisms (where is data processed, what transfer mechanisms are in place?); and determining whether a vendor's AI system qualifies as high-risk under the EU AI Act (which triggers additional procurement due diligence). A half-day focused workshop with practical exercises — such as reviewing real AI vendor contracts — is typically sufficient.