You have the budget. You have the mandate. Now you need a provider — and the UK AI training market is a minefield. Since 2023, corporate spending on AI training in the UK has grown by approximately 340%, creating a gold rush of providers ranging from world-class practitioners to yesterday's digital marketing consultants who rebranded overnight. As an L&D director, you know that the wrong choice does not just waste your training budget — it inoculates your workforce against AI adoption by delivering a poor first experience. This guide gives you the eight questions that separate the providers worth hiring from the ones worth avoiding.
By Toni Dos Santos, Co-Founder, Spicy Advisory
Why This Guide Exists
The UK government has identified AI skills as a critical national priority. The Department for Science, Innovation and Technology (DSIT) estimates that AI could contribute £400 billion to the UK economy by 2030, but only if the workforce can keep pace with the technology. The National AI Strategy's workforce pillar explicitly calls for massive upskilling — and the private sector is responding with spending.
The problem is not investment. It is quality. Fewer than 40% of UK companies report being satisfied with the ROI of their AI training programmes, according to the CIPD's 2025 Learning at Work Survey. The average AI training engagement produces a burst of enthusiasm that fades within six weeks, leaving behind a handful of power users, a majority who revert to old habits, and a finance director asking uncomfortable questions about what exactly £80,000 bought.
The UK corporate training market is projected to reach £42 billion by 2027, with AI training as its fastest-growing segment. That growth has attracted every type of provider: traditional L&D companies bolting AI modules onto existing catalogues, technology vendors disguising product training as skills development, solo consultants with ChatGPT experience and a Canva-designed website, and — yes — genuine experts who combine deep AI knowledge with proven adult learning methodology.
Your job is to tell them apart. Here is how.
The Spicy 8-Point AI Training Provider Scorecard
We developed this scorecard after reviewing dozens of AI training engagements across the UK — the ones that delivered lasting change and the ones that did not. Each question targets a specific failure mode we have observed repeatedly. Score each provider on a 1-5 scale for each question; any provider scoring below 30 out of 40 should be reconsidered.
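The tally itself can live in a spreadsheet, but as an illustrative sketch of how the scoring works (the question labels and the 30-out-of-40 threshold come from this guide; the sample scores are invented):

```python
# Illustrative tally for the 8-point scorecard described in this guide.
# Question labels follow the guide; the sample scores below are hypothetical.

QUESTIONS = [
    "Role/department customisation",
    "Post-training embedding",
    "Behavioural change metrics",
    "Governance and responsible use",
    "Practitioner trainers",
    "Scalability to whole organisation",
    "Tool-agnostic methodology",
    "CFO-grade ROI measurement",
]

def score_provider(scores: dict[str, int], threshold: int = 30) -> tuple[int, bool]:
    """Sum 1-5 scores across the eight questions; flag providers below the threshold."""
    for question in QUESTIONS:
        if not 1 <= scores[question] <= 5:
            raise ValueError(f"Score for {question!r} must be between 1 and 5")
    total = sum(scores[q] for q in QUESTIONS)
    return total, total >= threshold

# Hypothetical provider: strong on delivery, weak on embedding and ROI.
sample = dict(zip(QUESTIONS, [5, 2, 3, 4, 5, 4, 4, 2]))
total, passes = score_provider(sample)
print(f"Total: {total}/40 ({'shortlist' if passes else 'reconsider'})")
```

In this invented example the provider scores 29/40, so the threshold correctly flags them for reconsideration despite strong individual scores on delivery.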
Question 1: Do They Customise by Role and Department, or Deliver One-Size-Fits-All?
This is the single most important differentiator. A finance team needs to learn AI applications for forecasting, reconciliation, and audit trail management. A marketing team needs prompt engineering for content creation, audience analysis, and campaign optimisation. An HR team needs AI for recruitment screening, policy compliance, and employee experience analysis.
If a provider's response to "What does the training look like for our finance team versus our marketing team?" is essentially "We cover the same core content with everyone" — walk away. Generic AI training produces generic results, which is to say no results. The best providers will ask for your org chart, your tech stack, and your departmental KPIs before they even propose a curriculum.
Our own approach at Spicy Advisory always begins with departmental workflow mapping. We wrote about why this matters in our guide to AI training that actually sticks.
Question 2: What Happens After the Training Day?
The training day — or week — is the easy part. The hard part is the 90 days that follow, when participants try to apply what they learned to real work, hit obstacles, get frustrated, and quietly revert to their old methods.
The best AI training providers build post-training embedding into their core offering. This typically includes: follow-up coaching sessions at 30, 60, and 90 days; a dedicated Slack or Teams channel for ongoing questions; office hours with trainers for real-world problem-solving; and refresher workshops when new tools or capabilities launch.
If a provider's engagement ends when the trainer leaves the building, you are buying an event, not a transformation. Ask specifically: "What does the post-training support look like, and for how long?" If they cannot describe a concrete embedding programme, they are not serious about behavioural change.
Question 3: Can They Show Behavioural Change Metrics, Not Just Satisfaction Scores?
Nearly every training provider can show you a slide deck of satisfaction scores. "94% of participants rated the training as excellent or very good." This tells you nothing about whether anyone actually changed how they work.
The metrics that matter are:
- AI tool adoption rates — What percentage of trained employees are actively using AI tools 90 days after training?
- Time savings per workflow — Can they demonstrate measurable reduction in time spent on specific tasks?
- Quality improvements — Are AI-augmented outputs measurably better than pre-training outputs?
- Governance compliance — Are trained employees following AI usage policies?
- Manager confidence — Do line managers feel equipped to support and evaluate AI-augmented work?
Ask providers for case studies with these metrics. If they can only offer satisfaction scores and anecdotal testimonials, their impact measurement is not mature enough for a serious corporate programme. For more on measuring AI initiatives, see our CFO's guide to measuring AI ROI.
Question 4: Do They Cover AI Governance and Responsible Use, or Just Tools?
Any provider can teach your team to write better prompts. The question is whether they also teach your team when not to use AI, how to identify hallucinated outputs, what data can and cannot be fed into which tools, and how to maintain audit trails for AI-assisted decisions.
With 80% of UK organisations citing ethics as the most significant hurdle to AI adoption, governance training is not optional — it is the foundation that makes everything else safe. A provider who treats governance as a 30-minute module tacked onto the end of a tools workshop is a provider who does not understand the UK regulatory landscape.
The best providers weave governance throughout every module. They do not teach prompt engineering in a vacuum; they teach it within the context of data classification policies, intellectual property considerations, and output verification protocols. Our AI governance framework for mid-market companies outlines what responsible AI use training should cover.
Question 5: Are Their Trainers Practitioners or Just Presenters?
This question exposes a structural problem in the AI training market. Many providers employ trainers who are skilled presenters and facilitators but have never actually implemented AI in a business context. They can demonstrate tools impressively on stage but cannot troubleshoot a real workflow integration or advise on a genuine edge case.
Ask to meet the actual trainers — not the sales team — and probe their experience:
- What AI tools do you personally use in your own work, and for what tasks?
- Describe an AI implementation you have led or contributed to in a business context.
- What is the most common mistake you see when [specific department] tries to adopt AI?
- How do you handle it when a participant's use case does not fit neatly into the training curriculum?
A practitioner-trainer can answer these questions with specific, detailed examples from their own experience. A presenter-trainer will give generic, framework-level answers. The difference in training quality is enormous.
Question 6: Can They Scale from a Pilot Team to the Whole Organisation?
Many providers deliver excellent training to a group of 15 enthusiastic volunteers. The question is whether they can deliver the same quality to 150 people across multiple departments, including the sceptics, the technophobes, and the passive resisters who did not volunteer for the pilot.
Scaling AI training requires:
- Train-the-trainer capabilities — Can they certify internal champions to sustain momentum after the formal programme ends?
- Multiple delivery formats — Can they offer in-person, virtual, and hybrid options to accommodate distributed teams?
- Differentiated tracks — Can they design separate curricula for different skill levels, from complete beginners to power users?
- Change management integration — Do they work with your change management and internal communications teams to drive adoption beyond the training room?
If a provider's maximum group size is 20 and they have never delivered a multi-department rollout, they may be perfect for a pilot but incapable of scaling. Know which phase you are in, and choose accordingly. Our article on AI change management explores the scaling challenge in depth.
Question 7: Do They Stay Tool-Agnostic, or Push a Single Platform?
Beware of AI training providers who are also resellers or partners of specific AI platforms. Their commercial incentive is to train you on the tool they profit from, not the tool that best fits your workflows.
The best providers are tool-agnostic. They teach principles and skills that apply across platforms — prompt engineering fundamentals, output evaluation techniques, workflow integration patterns — and then help you evaluate which tools best fit your specific context. They should be as comfortable training on Claude as on ChatGPT, on Copilot as on Gemini.
Ask directly: "Do you have a commercial relationship with any AI tool vendor?" And: "If we decided to use a different tool than the one you demonstrated in training, how would that affect your curriculum?" An agnostic provider welcomes these questions. A platform-dependent provider gets uncomfortable. For our own comparison of enterprise AI tools, see our ChatGPT Enterprise vs Copilot vs Gemini comparison.
Question 8: Can They Demonstrate ROI in Terms a CFO Would Accept?
This is the question that separates serious providers from hobbyists. Training is an investment, and your CFO wants to see returns expressed in business terms: hours saved, revenue influenced, cost avoided, risk reduced.
A strong provider can:
- Build a pre-training baseline measurement of the workflows you are targeting
- Define specific, quantifiable KPIs tied to business outcomes
- Provide post-training measurement at 30, 60, and 90 days
- Calculate ROI using a methodology your finance team can validate
- Present results in a format suitable for board reporting
If a provider cannot articulate how they would measure ROI before the training starts, they cannot measure it after. This is not about perfection — AI training ROI can be genuinely difficult to isolate. But a provider who has thought seriously about measurement will have a methodology, even an imperfect one. A provider who has not thought about it will change the subject to satisfaction scores.
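As a minimal sketch of the kind of methodology a finance team could validate — annualised value of measured time savings against programme cost — here is one hedged approach (every input below is a placeholder assumption, not a benchmark; substitute your own baseline data):

```python
# Minimal, hypothetical ROI sketch for an AI training programme.
# All inputs are placeholder assumptions; replace them with your measured baseline.

def training_roi(
    employees_trained: int,
    hours_saved_per_week: float,   # measured against the pre-training baseline
    loaded_hourly_cost: float,     # fully loaded cost per employee-hour (GBP)
    adoption_rate: float,          # share still using the new workflows at 90 days
    programme_cost: float,         # total engagement cost (GBP)
    weeks_per_year: int = 46,      # working weeks, net of leave
) -> dict[str, float]:
    """Annualise measured time savings and compare against programme cost."""
    annual_value = (
        employees_trained * adoption_rate * hours_saved_per_week
        * weeks_per_year * loaded_hourly_cost
    )
    return {
        "annual_value": round(annual_value, 2),
        "net_return": round(annual_value - programme_cost, 2),
        "roi_pct": round(100 * (annual_value - programme_cost) / programme_cost, 1),
    }

# Hypothetical: 100 people trained, 2 hrs/week saved, £40/hr loaded cost,
# 60% adoption at 90 days, £80,000 programme cost.
print(training_roi(100, 2.0, 40.0, 0.6, 80_000))
```

The point of a sketch like this is not the specific numbers but that every input is measurable: the baseline comes from discovery, the adoption rate and hours saved from the 30/60/90-day measurements, and the loaded hourly cost from your finance team.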
Red Flags: Warning Signs of a Bad AI Training Provider
Beyond the eight scorecard questions, watch for these warning signs that indicate a provider you should avoid:
Recycled Content
If their training materials feature screenshots from 2023, reference GPT-3.5 as cutting-edge, or include examples that are clearly lifted from YouTube tutorials — they are selling commodity content, not expertise. The AI landscape changes quarterly. Materials should be updated continuously, not annually.
No Post-Training Support
"We deliver the training and you take it from there" is a sentence that should end the conversation. Without post-training embedding, research consistently shows that roughly 70% of training content is forgotten within 24 hours (the Ebbinghaus forgetting curve). Any provider who does not account for this is either ignorant of adult learning science or indifferent to your outcomes.
Vanity Metrics Only
"We have trained 50,000 people in AI" is not a quality indicator. It is a volume indicator. Ask what happened to those 50,000 people. How many are still using AI tools 90 days later? How much time are they saving? What business outcomes improved? If the answer is "we do not track that" — the training is a product, not a programme.
No Industry or Sector Experience
AI applications in financial services are fundamentally different from AI applications in healthcare, manufacturing, or professional services. A provider who claims expertise across all sectors without demonstrating depth in any is likely operating at a superficial level. Ask for case studies in your specific sector.
Celebrity Trainer Model
If the provider's entire value proposition rests on one charismatic individual who delivers all the keynotes and workshops, you have a scalability problem and a concentration risk. What happens when that person is unavailable? Can the rest of the team deliver at the same level? Meet the full delivery team, not just the founder.
What Good AI Training Looks Like in Practice
Having described what to avoid, we turn to what a well-structured AI training programme looks like for a UK mid-market company:
Phase 1: Discovery and Design (2-3 weeks)
- Stakeholder interviews with leadership, department heads, and end users
- Workflow mapping of target departments to identify high-impact AI use cases
- Current-state skills assessment to establish baseline AI literacy
- Custom curriculum design aligned to business objectives and departmental KPIs
Phase 2: Delivery (2-4 weeks)
- Executive briefing for C-suite and senior leadership (half day)
- Department-specific workshops with hands-on, workflow-integrated exercises (1-2 days per department)
- Governance and responsible AI module for all participants (half day)
- Champion certification for internal AI advocates (1 day)
Phase 3: Embedding (90 days)
- Monthly coaching sessions with trained departments
- Ongoing access to trainer support channel
- 30/60/90-day measurement against pre-defined KPIs
- Quarterly refresh sessions incorporating new tools and capabilities
This structure typically costs between £30,000 and £120,000 depending on company size, number of departments, and depth of customisation. Per-employee, it works out to approximately £200-400 for a comprehensive programme, compared to £50-100 for a generic one-day workshop — but the comprehensive programme delivers 5-10x more measurable impact.
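The "cost per behaviour changed" framing used later in this guide makes that comparison concrete. As a back-of-envelope sketch (the adoption rates below are illustrative assumptions, not measured figures):

```python
# Back-of-envelope "cost per behaviour changed" comparison.
# Adoption rates are illustrative assumptions, not measured benchmarks.

def cost_per_adopter(cost_per_person: float, adoption_rate_90d: float) -> float:
    """Cost per employee still actively using AI tools at 90 days."""
    return round(cost_per_person / adoption_rate_90d, 2)

# Hypothetical: a £75 generic workshop where 5% of attendees sustain adoption,
# versus a £300 comprehensive programme where 60% do.
generic = cost_per_adopter(75, 0.05)
comprehensive = cost_per_adopter(300, 0.60)
print(f"Generic: £{generic} per adopter; comprehensive: £{comprehensive} per adopter")
```

Under these assumed rates, the cheaper workshop costs three times as much per employee whose behaviour actually changes, which is the comparison a CFO should be shown.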
The UK Government AI Skills Landscape
Any AI training procurement should consider the broader UK skills landscape. The government's approach includes:
- AI Skills Bootcamps: Free or heavily subsidised introductory programmes, useful as a baseline but not sufficient for organisational transformation
- AI Upskilling Fund: Financial support for SMEs investing in AI training, potentially offsetting 30-50% of your programme costs
- National AI Strategy workforce pillar: Sets the policy direction that regulators are following — understanding this helps you anticipate what skills will become mandatory, not just desirable
- Sector-specific initiatives: The FCA, NHS, and MoD all have AI skills frameworks that may apply to your industry
A good training provider will be fluent in these initiatives and help you leverage available support. A great provider will already be an accredited delivery partner for relevant government programmes.
How to Run the Procurement Process
Based on our experience advising UK companies on AI training procurement, here is a recommended process:
- Define your objectives first. What business outcomes do you want AI training to enable? Express these in terms your CFO would recognise.
- Shortlist 3-5 providers using the 8-Point Scorecard as your evaluation framework.
- Request customised proposals — not off-the-shelf brochures. Any provider who cannot write a proposal specific to your organisation is not customising their training either.
- Meet the actual delivery team, not the sales team. Ask the practitioner questions from Question 5.
- Request references from similar organisations — similar size, sector, and AI maturity level. Call those references and ask specifically about post-training outcomes and provider responsiveness.
- Negotiate post-training support into the contract, not as an optional add-on. Embedding should be a contractual deliverable with defined milestones.
- Define measurement criteria upfront and make a portion of the fee contingent on achieving agreed outcomes.
"The best AI training investment you will ever make is not finding the cheapest provider or the most famous one. It is finding the one that takes as much care with what happens after the training as they do with what happens during it." — Toni Dos Santos, Co-Founder, Spicy Advisory
Looking for an AI training provider that checks all 8 boxes? Spicy Advisory delivers customised, role-specific AI training for UK mid-market companies with built-in governance, 90-day embedding, and CFO-grade ROI measurement. We are practitioners, not presenters — and we do not end when the workshop does. Book a discovery call.
Frequently Asked Questions
How much does AI training cost in the UK?
AI training costs in the UK vary significantly based on depth and customisation. Generic one-day workshops typically cost £50-100 per person, while comprehensive programmes including discovery, customised delivery, and 90-day embedding run £200-400 per person. For a mid-market company of 100-200 employees, expect total programme costs of £30,000-£120,000 for a full organisational rollout. The UK Government's AI Upskilling Fund may offset 30-50% of costs for eligible SMEs. The critical consideration is not cost per day but cost per behaviour changed — a £400/person programme that delivers measurable workflow adoption is dramatically better value than a £100/person workshop that produces no lasting change.
What should I look for in an AI training provider?
The eight essential criteria are: role-specific customisation rather than generic content; structured post-training embedding and support lasting at least 90 days; behavioural change metrics rather than satisfaction scores; integrated AI governance and responsible use training; practitioner-trainers with real implementation experience; scalability from pilot teams to full organisations; tool-agnostic methodology not tied to a single platform; and demonstrable ROI measurement in terms a CFO would accept. Additionally, watch for red flags including recycled content, no post-training support, vanity metrics only, no sector-specific experience, and dependency on a single celebrity trainer.
How long should an AI training programme last?
An effective AI training programme for a UK mid-market company typically spans 4-6 months from discovery to embedded change. This includes 2-3 weeks of discovery and curriculum design, 2-4 weeks of active delivery (workshops, hands-on sessions, governance training), and 90 days of post-training embedding including coaching, support channels, and measurement. The common mistake is treating AI training as a one-day or one-week event. Research shows that 70% of training content is forgotten within 24 hours without reinforcement. The embedding phase — where participants apply skills to real work with ongoing support — is where the actual behavioural change happens.
How do I measure AI training effectiveness?
Measure AI training effectiveness across five dimensions: AI tool adoption rates (what percentage of trained employees actively use AI tools 90 days post-training), time savings per workflow (measurable reduction in time spent on targeted tasks), quality improvements (are AI-augmented outputs measurably better), governance compliance (are employees following AI usage policies), and manager confidence (do line managers feel equipped to support AI-augmented teams). Satisfaction scores alone are vanity metrics — they measure whether people enjoyed the training, not whether it changed their work. Establish baseline measurements before training begins, then measure at 30, 60, and 90 days post-training to track genuine behavioural change and calculate ROI in terms your CFO will accept.