Four months and $162,000 in annual licensing costs. That's what a mid-market professional services firm had invested in Microsoft Copilot when they called us. They had 450 licenses deployed. Active weekly usage sat at 18%. Leadership was asking whether to renew. This is the story of how a structured adoption program moved that number to 72% in eight weeks — and what it taught us about why enterprise AI tools gather dust.

Toni Dos Santos is Co-Founder of Spicy Advisory, where he designs and delivers enterprise AI adoption programs that turn shelfware into measurable productivity gains.

The Starting Point: 450 Licenses, 18% Usage, Zero Visibility

The company is a professional services firm with roughly 600 employees across marketing, finance, operations, and client delivery. They had rolled out Microsoft 365 Copilot to 450 employees four months earlier. The rollout followed the standard playbook: company-wide email announcement, a 45-minute webinar, a link to Microsoft's training resources, and an IT helpdesk ticket category for Copilot issues.

When we pulled the usage data from the Microsoft 365 admin center, the picture was stark. Only 81 of 450 licensed users had touched Copilot in the past 30 days. Of those 81, just 34 used it more than twice a week. The rest had tried it once or twice and stopped. This pattern is not unusual. Gartner's 2025 Digital Workplace Survey found that enterprises achieve only 20-30% sustained usage of AI productivity tools within the first six months of deployment. McKinsey's Superagency report points to the same gap: 92% of companies plan to increase AI spending, but only 1% have reached AI maturity.
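For anyone who wants to reproduce this measurement, here is a minimal sketch of the 30-day active count, assuming a per-user usage CSV exported from the admin center's usage reports. The file name and the "Last Activity Date" column header are assumptions; match them to whatever your actual export contains.

```python
# Minimal sketch: count 30-day active users from a Microsoft 365 usage
# export. File name and column header are assumptions; adjust both to
# match the CSV you download. Assumes ISO dates (YYYY-MM-DD).
import csv
from datetime import date, timedelta

CUTOFF = date.today() - timedelta(days=30)
licensed = active = 0

with open("copilot_usage_export.csv", newline="", encoding="utf-8-sig") as f:
    for row in csv.DictReader(f):
        licensed += 1
        last_seen = row.get("Last Activity Date", "").strip()
        if last_seen and date.fromisoformat(last_seen) >= CUTOFF:
            active += 1

share = active / max(licensed, 1)
print(f"{active}/{licensed} licensed users active in the last 30 days ({share:.0%})")
```

Run against this client's export, that arithmetic is what produced the 81-of-450 figure above.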

The firm's COO put it bluntly: "We bought a fleet of race cars and nobody knows how to drive them."

Weeks 1-2: The Audit — Discovering What Was Actually Happening

We started where we always start: not with training, but with understanding. We ran 30-minute workflow mapping sessions with team leads from each of the three target departments — marketing (42 people), finance (38 people), and operations (65 people). We asked one question per session: "Walk me through your three most time-consuming recurring tasks this week."

What we found was telling. Marketing spent an average of 5.3 hours per person per week on report reformatting, meeting summary write-ups, and first-draft content creation — all tasks where Copilot could deliver immediate value. Finance burned 4.8 hours weekly on data consolidation across spreadsheets, variance commentary writing, and email drafting for stakeholder updates. Operations lost 6.1 hours to status report compilation, process documentation updates, and vendor communication templates.

But the audit also revealed why people had stopped using Copilot. Three patterns emerged:

1. A permission gap. Nobody had told employees whether using AI on internal or client work was actually allowed, so most defaulted to not using it.

2. A relevance gap. The rollout training covered generic features, and nobody had shown people how to apply Copilot to their own recurring tasks.

3. A skills gap. Vague one-line prompts produced mediocre outputs, and after a disappointing attempt or two, most people concluded the tool was not worth the effort.

The audit produced a prioritized list of 14 high-impact use cases across the three departments, ranked by estimated time savings and implementation ease. We selected the top 5 for immediate training.
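To make that ranking concrete, here is a simplified sketch of the scoring. The use case names come from the audit, but the hour estimates, the ease scores, and the hours-times-ease weighting are illustrative placeholders rather than our exact model.

```python
# Simplified sketch of the use case ranking: estimated weekly hours
# saved per person, weighted by ease of implementation (1 = hard,
# 5 = easy). All numbers below are hypothetical placeholders.
use_cases = [
    ("Meeting summary write-ups (marketing)", 1.5, 5),
    ("Variance commentary drafts (finance)", 1.2, 3),
    ("Status report compilation (operations)", 2.0, 4),
    ("First-draft content creation (marketing)", 1.8, 4),
    ("Vendor communication templates (operations)", 1.0, 5),
]

for name, hours, ease in sorted(use_cases, key=lambda u: u[1] * u[2], reverse=True):
    print(f"{name}: impact score {hours * ease:.1f}")
```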

Weeks 3-4: Department-Specific Training Rollout

Generic AI training is the single biggest waste of enterprise L&D budget right now. A Forrester study on enterprise software adoption found that role-specific training increases sustained usage by 3.4x compared to generic onboarding. So we built three separate 90-minute sessions — one per department — each focused exclusively on that team's prioritized use cases.

Marketing (42 people, 2 sessions): We trained on meeting summarization in Teams, first-draft generation in Word using Copilot with company style guides loaded as reference documents, and PowerPoint deck restructuring from raw notes. The hands-on exercise: every participant summarized their last real team meeting using Copilot and compared it to their manual notes. Average time to produce a usable summary dropped from 22 minutes to 4 minutes during the session.

Finance (38 people, 2 sessions): We focused on Excel Copilot for variance analysis narratives, Outlook Copilot for stakeholder update drafts, and Word Copilot for quarterly commentary generation. The hands-on exercise: participants used Copilot to generate variance commentary on a real (anonymized) monthly report. The key insight for finance was teaching evaluation frameworks — how to verify AI-generated numbers before sending.

Operations (65 people, 3 sessions): We trained on Teams Copilot for cross-functional meeting recaps, Word Copilot for process documentation updates, and Outlook Copilot for vendor communication templates. The hands-on exercise: each participant built a reusable prompt template for their most common vendor email type.

Every participant left with at least one working workflow they could use the next morning. That was the non-negotiable benchmark. If someone walked out without a ready-to-use process, the session failed.

Weeks 5-6: The Embedding Cadence That Made Habits Stick

Training creates awareness. Embedding creates habits. Behavioral science research from University College London shows that new habits take an average of 66 days to form, but the critical window is the first two weeks after initial exposure. Miss that window and reversion to old workflows is almost guaranteed.

Here is the embedding cadence we ran:

Week 5 — Active experimentation: Every trained employee committed to using at least one AI workflow on a real task each day. Results were posted in a dedicated Teams channel — one per department. We monitored the channel and provided async feedback within 4 hours. The channel created social proof: when someone in marketing posted that they summarized a 90-minute client call in 3 minutes, six colleagues tried it the same day.

Week 6 — Office hours and advanced tips: We ran 30-minute live troubleshooting sessions per department. These were not presentations. They were pure Q&A, focused on the specific blockers people hit during Week 5. The most common issue across all departments: people were writing prompts that were too vague. We introduced the "context-task-format" prompting framework and watched output quality jump immediately.
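For readers who want to apply it, here is the shape of the context-task-format structure as a reusable template. This is a minimal sketch; the helper function and the example field contents are illustrative and not part of any Copilot feature.

```python
# A minimal sketch of the context-task-format prompting structure.
# The helper and example contents are illustrative, not a Copilot API.
def build_prompt(context: str, task: str, output_format: str) -> str:
    """Assemble a prompt that states context, task, and desired format."""
    return (
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Format: {output_format}"
    )

print(build_prompt(
    context=("Monthly status update to a logistics vendor; ongoing delays "
             "on two purchase orders; tone is professional but direct."),
    task="Draft an email requesting a revised delivery schedule for both orders.",
    output_format="Three short paragraphs, ending with a bullet list of open questions.",
))
```

The habit it builds is the point: every prompt states its context, its task, and the desired output format before anything gets generated.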

During this phase, we also addressed the permission gap directly. Working with the COO, we published a one-page AI usage policy that explicitly stated: "Using AI tools to draft, summarize, and analyze is encouraged for all internal and client-facing work, provided outputs are reviewed before distribution." That single document unlocked 28% of the workforce that had been sitting on the sidelines.

"The technology was never the bottleneck. The bottleneck was that nobody told people it was okay to use it, and nobody showed them how to use it on their actual work. Fix those two things and adoption takes care of itself." — Toni Dos Santos, Co-Founder, Spicy Advisory

Weeks 7-8: Results and Measurement

At the end of week 8, we pulled usage data from the Microsoft 365 admin center again and ran a structured survey across all three departments. The numbers: weekly active usage had risen from 18% to 72%, meaning 324 of 450 licensed users were engaging with Copilot at least twice per week, and the survey put average time savings at 6.2 hours per active user per week.

The financial impact was straightforward. At 6.2 hours saved per week across 324 active users, that is roughly 2,009 hours recovered per week. At a blended cost of $55 per hour for professional services staff, the weekly productivity gain was approximately $110,000. Against the annual licensing cost of $162,000 and the program investment, the ROI turned positive within the first month of sustained usage. Deloitte's 2026 State of AI report found that organizations with structured adoption programs see 2.3x the productivity impact of those relying on self-service rollouts — this engagement confirmed that ratio.
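Spelled out in code, the back-of-envelope math looks like this, using only the figures already cited above:

```python
# The back-of-envelope ROI calculation from this section.
active_users = 324            # 72% of 450 licenses
hours_saved_per_week = 6.2    # average from the structured survey
blended_rate_usd = 55         # blended hourly cost of staff
annual_license_cost = 162_000

weekly_hours = active_users * hours_saved_per_week   # ~2,009 hours
weekly_value = weekly_hours * blended_rate_usd       # ~$110,000
payback_weeks = annual_license_cost / weekly_value

print(f"Hours recovered per week: {weekly_hours:,.0f}")
print(f"Weekly productivity value: ${weekly_value:,.0f}")
print(f"Weeks of usage to cover a year of licensing: {payback_weeks:.1f}")
```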

Key Takeaways for Your Own AI Adoption Program

This case study is an anonymized composite, but every data point reflects real patterns we see across mid-market engagements. Here are the five lessons that transfer to any enterprise AI rollout:

1. Audit before you train. You cannot design effective training without understanding what people actually do. The 30-minute workflow mapping sessions cost almost nothing and changed everything about how we structured the program.

2. Role-specific training is non-negotiable. Generic AI workshops produce high satisfaction scores and near-zero behavior change. Build separate sessions for separate roles, using each team's real tasks and real data.

3. The embedding phase is where adoption lives or dies. A 90-minute training session does not change years of work habits. The 2-week embedding cadence — daily practice, async feedback, live office hours — is what turns a good session into a lasting behavior shift.

4. Publish a clear AI usage policy. If people are unsure whether using AI is "allowed," they will default to not using it. A simple, explicit policy removes the single largest silent blocker to adoption.

5. Measure what matters. Track active weekly usage rate, time saved per workflow, and independent use case discovery. Forget license deployment counts and satisfaction scores — they tell you nothing about whether AI is actually changing how work gets done.

Want results like these for your organization? Spicy Advisory runs structured AI adoption programs for mid-market and enterprise teams — from audit through embedding. Every engagement is measured by usage rates and time saved, not satisfaction scores. Book a discovery call to discuss your AI adoption challenge.

Frequently Asked Questions

How long does a typical enterprise AI adoption program take?

A structured program covering audit, training, and embedding typically runs 6-8 weeks. The audit phase takes 1-2 weeks, role-specific training takes 1-2 weeks, and the embedding cadence runs 2-4 weeks. Rushing the embedding phase is the most common mistake — it is where lasting habits form.

What is a good AI adoption rate target after training?

Aim for 40% or higher weekly active usage within 6 weeks of completing the program. Top-performing programs reach 60-75%. Anything below 30% after the embedding phase signals a structural issue — typically generic training content, missing management reinforcement, or an unclear usage policy.

Does this approach work for tools other than Microsoft Copilot?

Yes. The audit-train-embed framework is tool-agnostic. We have run the same program structure for organizations using ChatGPT Enterprise, Gemini for Google Workspace, and multi-tool environments. The methodology focuses on workflow change, not tool features.

How do you measure the ROI of an AI adoption program?

We track three primary metrics: weekly active usage rate (percentage of licensed users engaging with AI tools at least twice per week), time saved per user per week (measured via structured survey and usage analytics), and independent use case discovery (number of new AI workflows teams create without trainer guidance). These feed directly into an hours-saved-times-blended-cost ROI calculation.