UK financial services firms are deploying AI faster than they're training their people to use it responsibly. With 68% of UK financial institutions now running at least one AI system in production — from credit scoring to customer service chatbots to portfolio analytics — the FCA and PRA have made it clear that competency isn't optional. Yet only 23% of these firms have structured AI training programmes. That's not a skills gap — it's a regulatory time bomb.

By Toni Dos Santos, Co-Founder, Spicy Advisory

The Regulatory Tightrope: What the FCA and PRA Actually Expect

Let's be specific about what UK financial regulators expect when it comes to AI. The regulatory framework isn't hypothetical — it's already in force and being actively enforced.

SS1/23: Model Risk Management

The PRA's Supervisory Statement SS1/23 on model risk management is the single most important document for any UK financial services firm deploying AI. Published in May 2023 and now fully in force, it requires firms to have robust model risk management frameworks that cover the entire model lifecycle — from development and validation through to deployment and retirement.

For AI and machine learning models specifically, SS1/23 demands that firms can explain how their models work, test for bias and accuracy, and maintain ongoing monitoring. The critical point: the PRA considers inadequate staff competency a model risk management failure. If the people using, building, or overseeing AI models don't understand them, the firm is non-compliant — regardless of how sophisticated the technology is.

Consumer Duty and AI

The FCA's Consumer Duty, which came into full effect in July 2023, has profound implications for AI use in financial services. Any AI system that influences customer outcomes — pricing, product recommendations, claims decisions, affordability assessments — must demonstrably deliver good outcomes for consumers. The FCA has explicitly stated that firms cannot use algorithmic complexity as an excuse for poor customer outcomes.

In practice, this means frontline staff using AI-powered tools need to understand when the tool might produce unfair or inaccurate results, and they need the confidence to override or escalate. A 2025 FCA thematic review found that 42% of firms using AI in customer-facing processes could not demonstrate that staff understood the limitations of the tools they were using.

SM&CR: Personal Accountability for AI

The Senior Managers and Certification Regime creates personal accountability for AI governance at the most senior level. Senior Managers with responsibility for AI systems can face personal regulatory consequences — including fines and prohibition orders — if AI failures occur on their watch due to inadequate governance. This isn't theoretical: the FCA has already used SM&CR to hold individuals accountable for technology failures.

The implication is clear: AI training isn't just an L&D initiative — it's a regulatory obligation that runs from the trading floor to the boardroom.

What "AI Literacy" Actually Means in Financial Services

Generic AI training — the kind where everyone watches the same two-hour webinar about "what is machine learning" — doesn't satisfy regulatory expectations and doesn't change behaviour. Financial services AI training needs to be role-specific, regulation-aware, and practical.

Front Office: Client-Facing Teams

Relationship managers, advisors, and client service teams need to understand:

- When an AI-generated recommendation or output might be unfair or inaccurate, and how to recognise the warning signs
- How the Consumer Duty applies to AI-influenced pricing, product recommendations, and affordability assessments
- When and how to override an AI output or escalate to a human decision-maker

Risk and Compliance Teams

These teams form the second line of defence and need deeper technical understanding:

- How models are validated and monitored across the full lifecycle under SS1/23
- How to test AI models for bias and accuracy, and how to challenge first-line explanations of model behaviour
- How to document model risk in a way that stands up to PRA scrutiny

Operations Teams

Back-office and operations teams are often the heaviest AI users but the least trained:

- How to verify AI outputs before they feed downstream processes
- Which tools are approved for which data, and why consumer-grade AI tools are off-limits for customer data
- When to escalate anomalous or implausible AI outputs rather than pass them on

Board and C-Suite

Board members and senior executives don't need to understand gradient descent, but they do need to:

- Understand their personal SM&CR obligations for the AI systems under their responsibility
- Set and monitor the firm's AI risk appetite
- Ask the right questions of technical teams and satisfy regulatory scrutiny of AI governance

The Spicy Financial Services AI Training Framework

After delivering AI training programmes to financial services firms ranging from FTSE 100 banks to specialist insurers and fintech scale-ups, we've developed a four-tier framework that maps directly to regulatory expectations while remaining practical and engaging.

Tier 1: AI Awareness
Audience: All staff. Duration: 2 hours.
Focus: What AI is, how it's used in the firm, risks, Consumer Duty basics.
Outcome: Every employee understands what AI does in their firm and their responsibilities.

Tier 2: Functional Literacy
Audience: Business users. Duration: 1 day.
Focus: Hands-on tool proficiency, prompt engineering, output verification, escalation protocols.
Outcome: Users can work effectively with AI tools and know their limitations.

Tier 3: Technical Competency
Audience: Data, quant, and model teams. Duration: 2 days.
Focus: Model governance, validation, bias testing, SS1/23 compliance, monitoring.
Outcome: Technical teams can build, validate, and monitor AI models to regulatory standard.

Tier 4: Governance Leadership
Audience: Board, C-suite, senior managers. Duration: Half day.
Focus: Strategic AI oversight, SM&CR obligations, regulatory reporting, risk appetite.
Outcome: Leaders can govern AI effectively and satisfy regulatory scrutiny.

Why Four Tiers Matter

The tiered approach isn't just organisational convenience — it maps directly to regulatory expectations. SS1/23 requires firms to demonstrate that individuals involved in model development, validation, and use have appropriate competency for their role. A one-size-fits-all programme can't demonstrate role-appropriate competency. Four tiers can.

Each tier includes assessment — not just attendance tracking. Tier 1 uses scenario-based quizzes. Tier 2 includes practical exercises with real AI tools. Tier 3 involves hands-on model validation workshops. Tier 4 uses board simulation exercises. This gives firms auditable evidence of competency that satisfies both internal compliance and external regulatory scrutiny.
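As an illustration only (not a feature of the framework itself), the auditable evidence described above can be modelled as a simple competency register. The tier names follow the table; the role-to-tier mapping, record fields, and pass mark below are assumptions:

```python
from dataclasses import dataclass
from datetime import date

# Assumed mapping of roles to the minimum tier they must evidence (illustrative only).
REQUIRED_TIER = {
    "all_staff": 1,
    "relationship_manager": 2,
    "model_validator": 3,
    "board_member": 4,
}

@dataclass
class AssessmentRecord:
    employee_id: str
    role: str
    tier: int          # 1 = AI Awareness .. 4 = Governance Leadership
    assessment: str    # e.g. "scenario quiz", "model validation workshop"
    score: float       # 0.0-1.0; the pass mark is an assumption
    assessed_on: date

def demonstrates_competency(records, employee_id, role, pass_mark=0.8):
    """Return True if the employee holds a passing assessment at (or above)
    the tier required for their role -- the kind of auditable check a
    supervisory visit might ask a firm to produce."""
    required = REQUIRED_TIER.get(role, 1)
    return any(
        r.employee_id == employee_id and r.tier >= required and r.score >= pass_mark
        for r in records
    )

records = [
    AssessmentRecord("emp-001", "relationship_manager", 2,
                     "practical exercise", 0.85, date(2025, 3, 1)),
]
print(demonstrates_competency(records, "emp-001", "relationship_manager"))  # True
print(demonstrates_competency(records, "emp-001", "model_validator"))       # False
```

The point of the sketch is that competency evidence is queryable data, not a pile of attendance certificates: given a role, the firm can answer "who has demonstrated the required tier?" on demand.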

Implementation: The 90-Day Sprint

We recommend a 90-day sprint to deploy the framework across the organisation:

- Days 1-30: Tier 1 awareness rollout to all staff, plus Tier 4 sessions for the board and senior managers
- Days 31-60: Tier 2 functional literacy for business users of AI tools
- Days 61-90: Tier 3 technical competency for data, quant, and model teams, followed by assessments across all tiers

The total investment for a mid-market firm (500-2,000 employees) typically ranges from £80,000 to £200,000, depending on the number of Tier 2 and Tier 3 participants. For context, the average FCA fine for a technology governance failure in 2024-25 was £4.2 million.

Data Residency and Tool Selection in Regulated Environments

Financial services firms face an additional layer of complexity: the AI tools their teams use must meet stringent data residency and security requirements. This is both a procurement decision and a training topic — staff need to understand why they can use some AI tools and not others.

Key considerations for financial services:

- Data residency: customer data should be processed in UK regions or under compliant UK GDPR transfer arrangements
- Training-data clauses: enterprise agreements should explicitly prohibit use of the firm's data for model training
- Data classification: controls should prevent sensitive data from reaching unapproved AI tools
- Audit trails: all AI interactions involving customer data should be logged
- Consumer-grade tools: free, consumer AI accounts should be prohibited for customer or commercially sensitive data

Need a financial-services-specific AI training programme that satisfies FCA, PRA, and SM&CR requirements? Spicy Advisory delivers role-specific AI training for banks, insurers, asset managers, and fintech firms — from frontline staff to the boardroom. Our programmes are built on The Spicy Financial Services AI Training Framework and include auditable competency assessments. Book a discovery call.

Frequently Asked Questions

Does the FCA require AI training for financial services firms?

The FCA does not mandate a specific AI training programme, but it does require firms to demonstrate that staff have appropriate competency for the AI systems they use or oversee. This obligation flows from multiple regulatory frameworks: the Consumer Duty requires staff to understand how AI affects customer outcomes; SS1/23 requires appropriate competency for model risk management; and the SM&CR requires senior managers to have adequate knowledge of the technology systems under their responsibility. In practice, firms without structured AI training programmes will struggle to demonstrate compliance during FCA supervisory visits and thematic reviews.

What AI governance frameworks apply to UK financial services?

UK financial services firms are subject to multiple overlapping frameworks. The PRA's SS1/23 covers model risk management for all models including AI and ML. The FCA's Consumer Duty applies to any AI system affecting customer outcomes. The SM&CR creates personal accountability for senior managers overseeing AI systems. The ICO's AI guidance applies to any AI processing personal data under UK GDPR. Additionally, the Bank of England's approach to AI in financial services, published in 2024, sets expectations for systemic risk management. Firms operating across UK and EU jurisdictions must also consider the EU AI Act's requirements for high-risk AI systems in financial services.

How should banks handle data residency for AI tools?

UK banks must ensure that AI tools processing customer data comply with UK GDPR data transfer requirements, FCA expectations on data security, and PRA operational resilience standards. In practice, this means selecting AI tools that offer UK-based data processing (such as Microsoft Azure UK regions), ensuring enterprise agreements explicitly prohibit the use of customer data for model training, implementing data classification systems that prevent sensitive data from entering unapproved AI tools, and maintaining comprehensive audit trails of all AI interactions involving customer data. Consumer-grade AI tools (such as free ChatGPT accounts) should be prohibited for any use involving customer or commercially sensitive data.
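The classification and audit-trail controls described above can be sketched as a simple pre-flight gate in application code. This is a minimal illustration, not a compliance solution; the tool names, data classifications, and log format are all assumptions:

```python
import logging
from datetime import datetime, timezone

# Assumed tool registry: which data classes each tool is approved for (illustrative).
APPROVED_TOOLS = {
    "enterprise-assistant-uk": {"public", "internal", "customer"},  # UK-region, no-training agreement
    "consumer-chatbot": {"public"},                                 # consumer-grade: public data only
}

audit_log = logging.getLogger("ai_audit")

def gate_ai_request(tool: str, data_class: str, user: str) -> bool:
    """Allow the request only if the tool is approved for the data
    classification, and write an audit-trail entry either way."""
    allowed = data_class in APPROVED_TOOLS.get(tool, set())
    audit_log.info(
        "ts=%s user=%s tool=%s data_class=%s decision=%s",
        datetime.now(timezone.utc).isoformat(), user, tool, data_class,
        "allow" if allowed else "block",
    )
    return allowed

print(gate_ai_request("enterprise-assistant-uk", "customer", "emp-001"))  # True: approved tool
print(gate_ai_request("consumer-chatbot", "customer", "emp-001"))         # False: consumer-grade blocked
```

In a real deployment this gate would sit in front of every AI integration, with the tool registry owned by procurement and compliance rather than hard-coded; the training point is that staff understand why the gate exists, not how it is implemented.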

What does the SM&CR mean for AI accountability?

Under the Senior Managers and Certification Regime, specific individuals bear personal regulatory responsibility for the firm's AI systems. The Senior Manager responsible for technology, operations, or the specific business area using AI must be able to demonstrate that appropriate governance, risk management, and competency frameworks are in place. If an AI system causes customer harm, regulatory breach, or significant operational failure, the responsible Senior Manager may face personal enforcement action including fines, public censure, or prohibition from the industry. This means AI governance — including training — must have explicit board-level ownership and cannot be delegated entirely to technology or data teams.