The EU AI Act is not coming — it's here. The world's first comprehensive AI regulation entered into force in August 2024, and the compliance deadlines are arriving fast. Yet according to a 2025 survey by France Digitale, 67% of French SMEs and mid-market companies have no concrete plan for AI Act compliance. If you're a CEO, compliance manager, or HR director at a PME or ETI, this regulatory blind spot could cost you up to €35 million or 7% of your global annual turnover — whichever is higher.

By Toni Dos Santos, Co-Founder, Spicy Advisory

TL;DR — What You Need to Know: The EU AI Act (Regulation 2024/1689) applies to any company using AI across all 27 EU member states — not just tech companies that build it. If you use AI in HR, sales, finance, or customer service, you already have legal obligations: prohibitions have applied since February 2025, and the main high-risk requirements become enforceable in August 2026. Penalties reach up to €35M or 7% of global turnover. This guide covers the risk classification system, compliance timeline, obligations by company size (SMB, mid-market, enterprise), and provides the 5-step SPICY Compliance Framework to get ready. Talk to our team for tailored compliance guidance.

Why the AI Act Matters to Companies That Use AI — Not Just Those That Build It

There's a common misconception that the EU AI Act only affects technology companies that develop AI systems. This is dangerously wrong. The regulation applies along the entire AI value chain — including deployers, which is the legal term for companies that use AI systems in their operations. If your sales team uses an AI-powered CRM, if your HR department screens CVs with AI tools, or if your finance team relies on AI for credit scoring, you have obligations under the AI Act.

For French PMEs and ETIs, this is particularly significant. According to INSEE, 35% of French companies with 10 or more employees used AI in 2024, a figure that has likely grown substantially since. Many of these companies adopted AI tools without considering regulatory implications — and the compliance clock is now ticking.

The Risk Classification System: Understanding Where Your AI Falls

The AI Act introduces a tiered risk framework that determines your compliance obligations. Think of it like the CE marking system for products — but for artificial intelligence.

Unacceptable Risk (Prohibited)

These AI practices have been banned outright since February 2, 2025:

  • Social scoring of individuals by public or private actors
  • AI that manipulates behavior through subliminal or deceptive techniques, or that exploits vulnerabilities linked to age, disability, or social and economic situation
  • Emotion recognition in the workplace and in educational institutions (outside narrow medical or safety exceptions)
  • Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
  • Biometric categorization to infer sensitive attributes such as race, political opinions, or sexual orientation
  • Predictive policing based solely on profiling of individuals
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement, except in narrowly defined cases

If any of your AI tools fall into these categories, you must cease usage immediately. There is no grace period: the prohibition is already in force.

High Risk (Strict Obligations)

High-risk AI systems face the heaviest compliance requirements, enforceable from August 2, 2026. These include AI used in:

  • Employment and HR: CV screening, candidate ranking, promotion and termination decisions, performance evaluation
  • Access to essential services: credit scoring and creditworthiness assessment, risk pricing for life and health insurance
  • Education and vocational training: admissions, exam scoring, proctoring
  • Biometric identification and categorization (where not already prohibited)
  • Critical infrastructure, law enforcement, migration and border control, and the administration of justice

For deployers of high-risk systems, obligations include conducting fundamental rights impact assessments, ensuring human oversight, maintaining logs, informing employees when AI is used in HR decisions, and cooperating with regulatory authorities.

Limited Risk (Transparency Obligations)

AI systems that interact directly with people must meet transparency requirements. This includes:

  • Chatbots and conversational agents: users must be told they are interacting with an AI system
  • AI-generated or AI-manipulated content, including deepfakes: it must be labeled as artificially generated
  • Emotion recognition and biometric categorization systems (where permitted): affected individuals must be informed

Minimal Risk (No Specific Obligations)

AI applications like spam filters, AI-assisted writing tools for internal use, or inventory optimization systems face no specific obligations under the AI Act — though general principles of responsible AI use still apply.

The Compliance Timeline: Dates Every Leader Must Know

The AI Act rolls out in phases. Here are the critical milestones:

  1. February 2, 2025: Prohibitions on unacceptable-risk AI take effect. All banned practices must have ceased.
  2. August 2, 2025: Rules for General-Purpose AI (GPAI) models apply. If you use foundation models like GPT-4, Claude, or Gemini, your providers must comply with transparency and documentation requirements.
  3. August 2, 2026: The bulk of the regulation takes effect. High-risk AI system obligations become enforceable. National supervisory authorities must be operational.
  4. August 2, 2027: Remaining provisions for high-risk AI systems embedded in other EU-regulated products take effect.

For most PMEs and ETIs, August 2026 is the critical deadline. That gives you less than 18 months from today to inventory your AI systems, assess risk levels, and implement compliance measures.

The Awareness Gap: Why PMEs and ETIs Are Exposed

Large enterprises — particularly CAC 40 companies — have mobilized legal and compliance teams around the AI Act since 2023. Mid-market and smaller companies have not. The data paints a stark picture: 67% of French SMEs and mid-market companies have no concrete compliance plan (France Digitale, 2025), and only 12% of ETIs have appointed anyone responsible for AI governance.

This awareness gap creates real business risk. When enforcement begins in August 2026, regulatory authorities won't distinguish between companies that didn't know and companies that didn't care.

Enterprise, Mid-Market, and SMB: Different Scales, Different Challenges

The AI Act applies uniformly regardless of company size, but the compliance challenges vary significantly depending on your organization's scale and AI maturity.

Enterprise / Grands Groupes (5,000+ employees)

Large enterprises and CAC 40 companies typically have the resources for compliance — but face challenges of scale. With hundreds of AI systems deployed across departments, the biggest risk is shadow AI: teams adopting AI tools without central oversight. Enterprise organizations need centralized AI registries, cross-departmental governance committees, and systematic audit processes. The complexity of multi-country operations within the EU adds another layer, as national supervisory authorities may interpret enforcement differently. Executive AI literacy is critical to ensure board-level understanding of compliance obligations.

Mid-Market / ETI (250–4,999 employees)

ETIs are the most exposed segment. They have enough AI usage across departments to trigger meaningful obligations — particularly in HR, finance, and customer service — but often lack dedicated compliance infrastructure or legal teams with AI expertise. Only 12% have appointed someone responsible for AI governance. The opportunity: mid-market companies are small enough to move fast and implement changes across the organization quickly, but large enough to face real regulatory risk. This is the segment where proactive action delivers the highest ROI.

SMB / PME (under 250 employees)

The AI Act provides proportional fines for SMEs, but "proportional" can still mean hundreds of thousands of euros for a company with €5–10 million in revenue. For PMEs, the most common risk areas are AI-powered recruitment tools (high-risk category) and customer-facing chatbots (transparency obligations). The good news: most PME AI usage falls into minimal or limited risk categories, meaning compliance requirements are lighter. The priority is awareness — understanding which of your tools carry obligations and ensuring basic transparency requirements are met.

A European Regulation with National Implementation: France and Beyond

Unlike EU directives that require national transposition, the AI Act is a regulation — it applies directly and uniformly across all 27 EU member states from the same dates. A company compliant in France is compliant across the entire EU single market.

However, enforcement is national: each member state must designate its competent supervisory authorities (the Act set a deadline of August 2, 2025 for designation), and those authorities must be fully operational when high-risk obligations become enforceable in August 2026. The enforcement landscape is taking shape: at EU level, the European AI Office oversees general-purpose AI models; Spain has created a dedicated supervisory agency (AESIA); France is expected to build on existing regulators such as the CNIL and the DGCCRF, though final designations were still pending at the time of writing.

For companies operating across EU borders, this means one compliance framework covers all markets — but you should monitor the enforcement practices of supervisory authorities in each country where you operate. Companies based outside the EU (UK, US, Switzerland) that place AI systems on the EU market or whose AI outputs affect EU citizens are also subject to the AI Act's extraterritorial provisions, similar to GDPR.

The SPICY AI Act Compliance Framework: Your 5-Step Action Plan

At Spicy Advisory, we've developed a structured methodology to help PMEs and ETIs achieve AI Act compliance without paralysis. We call it the SPICY Compliance Framework — five actionable steps that take you from uncertainty to readiness.

S — Scan: Inventory Your AI Systems

You cannot comply with regulations for systems you don't know about. Start with a comprehensive AI inventory across all departments. Map every AI tool, model, and automated decision-making process in your organization. Include third-party SaaS tools with AI features — these count too. The output: a complete register of AI systems with their purpose, data inputs, and decision scope.
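As a sketch of what such a register might look like in practice (the field names and example entries here are illustrative, not prescribed by the regulation):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the company-wide AI system register (illustrative fields)."""
    name: str                  # e.g. "CV screening module in the ATS"
    department: str            # owning team
    vendor: str                # provider, or "internal" for in-house tools
    purpose: str               # what the system is used for
    data_inputs: list[str] = field(default_factory=list)  # categories of input data
    affects_individuals: bool = False  # does its output influence decisions about people?

# Building the register from a department survey
register = [
    AISystemRecord("CV screening module", "HR", "ExampleVendor",
                   "rank incoming applications",
                   data_inputs=["CVs", "cover letters"],
                   affects_individuals=True),
    AISystemRecord("Spam filter", "IT", "internal",
                   "filter inbound email",
                   data_inputs=["email metadata"]),
]

# Flag the entries that need a closer regulatory look first
needs_review = [r.name for r in register if r.affects_individuals]
print(needs_review)  # ['CV screening module']
```

Even a simple structure like this gives you the three things the later steps need: a complete list, an owner per system, and a first filter for systems that touch decisions about people.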

P — Prioritize: Classify by Risk Level

Apply the AI Act's risk classification to each system in your inventory. Focus first on potential prohibited practices (immediate action required) and high-risk systems (August 2026 deadline). Create a prioritized compliance roadmap based on risk level and deadline proximity. Don't try to tackle everything at once — sequence your efforts by regulatory urgency.
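A minimal sketch of this triage step, mapping each system's declared use case to a risk tier and deadline. The keyword-to-tier mapping below is an illustrative simplification, not a legal classification; real classification requires analysis of Article 5 and Annex III:

```python
# Illustrative risk triage: map a declared use case to an AI Act risk tier.
RISK_TIERS = {
    "social_scoring": ("prohibited", "already in force (Feb 2, 2025)"),
    "recruitment":    ("high", "Aug 2, 2026"),
    "credit_scoring": ("high", "Aug 2, 2026"),
    "chatbot":        ("limited", "transparency duties apply"),
    "spam_filter":    ("minimal", "no specific deadline"),
}

def triage(use_case: str) -> tuple[str, str]:
    """Return (risk_tier, deadline) for a use case; default to manual review."""
    return RISK_TIERS.get(use_case, ("unclassified", "needs manual legal review"))

inventory = ["recruitment", "chatbot", "spam_filter", "fraud_detection"]

# Sort so the most urgent tiers surface first on the roadmap
order = {"prohibited": 0, "high": 1, "limited": 2, "minimal": 3, "unclassified": 4}
roadmap = sorted(((u, *triage(u)) for u in inventory), key=lambda row: order[row[1]])
for use_case, tier, deadline in roadmap:
    print(f"{use_case}: {tier} ({deadline})")
```

The point of the exercise is the ordering: anything prohibited or high-risk goes to the top of the roadmap, and anything the mapping cannot place goes to legal review rather than being silently ignored.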

I — Implement: Build Compliance Measures

For each high-risk AI system, implement the required safeguards: fundamental rights impact assessments, human oversight mechanisms, logging and documentation, transparency measures for affected individuals. For limited-risk systems, ensure transparency obligations are met. Establish an internal AI policy that codifies acceptable use and governance procedures.
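The safeguards above can be turned into a per-system checklist. The mapping below is a sketch drawn from the deployer duties summarized in this article, not an exhaustive legal list:

```python
# Illustrative checklist generator: deployer obligations per risk tier.
OBLIGATIONS = {
    "high": [
        "fundamental rights impact assessment",
        "human oversight mechanism",
        "logging and documentation",
        "transparency measures for affected individuals",
    ],
    "limited": ["disclose AI interaction / label AI-generated content"],
    "minimal": [],  # no specific AI Act obligations
}

def checklist(tier: str) -> list[str]:
    """Return the open compliance tasks for a system of the given risk tier."""
    return OBLIGATIONS.get(tier, ["manual legal review"])

print(checklist("high")[0])  # 'fundamental rights impact assessment'
```

Tracking each item per system (done / in progress / open) gives you the audit trail that the Control step below relies on.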

C — Control: Monitor and Audit

Compliance is not a one-time exercise. Establish ongoing monitoring processes: regular audits of AI system performance and compliance, incident reporting mechanisms, feedback loops from employees and affected individuals, documentation updates as AI systems evolve. Build this into your existing quality management or compliance infrastructure — don't create a parallel bureaucracy.

Y — Y former ("train them in it"): Train Your Teams

The final step — and arguably the most important. Regulation means nothing if your people don't understand it. Train your leadership team on AI Act obligations and strategic implications. Train HR teams on high-risk obligations for AI in recruitment and evaluation. Train all employees on transparency requirements and responsible AI use. This is not a one-time workshop — it's an ongoing literacy program. According to France Compétences, only 15% of French companies have integrated AI literacy into their training plans for 2026, despite growing regulatory requirements.

"The AI Act doesn't ask companies to stop using AI. It asks them to use it responsibly, transparently, and with proper oversight. For most PMEs and ETIs, the compliance gap is not about technology — it's about awareness and process." — Toni Dos Santos, Co-Founder, Spicy Advisory

Sanctions: What's Really at Stake

The AI Act's penalty structure is designed to get attention:

  • Up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited AI practices
  • Up to €15 million or 3% of global turnover for non-compliance with high-risk AI obligations
  • Up to €7.5 million or 1% of turnover for providing incorrect or misleading information to authorities

For SMEs and startups, the regulation provides for proportional fines, but "proportional" to a company with €50 million in revenue still means potentially millions of euros. The reputational damage of an enforcement action may be even more costly than the fine itself.

Practical Steps You Can Take This Week

You don't need a six-month project to start. Here are immediate actions:

  1. Designate an AI Act owner. Someone in your organization — whether it's your DPO, compliance officer, or a senior manager — needs to own this topic.
  2. Run a quick AI inventory. Send a simple survey to department heads: "What AI tools does your team use?" You'll likely be surprised by the answers.
  3. Check for prohibited uses. Cross-reference your inventory against the prohibited practices list. If you find any, stop immediately.
  4. Brief your executive team. Share this article or a similar summary with your leadership. Compliance starts with awareness.
  5. Assess your biggest HR AI risks. If you use AI in recruitment, performance reviews, or workforce planning, these are your highest-risk areas. Prioritize them for compliance review.

For companies that want to build a broader AI governance framework, the AI Act compliance process can serve as the foundation for a more comprehensive program that covers both regulatory requirements and operational best practices.

Don't wait for enforcement to start preparing. Spicy Advisory's AI Governance Training program helps PMEs and ETIs build AI Act compliance into their operations — practically, efficiently, and without legal jargon overload. Book a discovery call or explore our AI training programs by role.

Frequently Asked Questions

Is my company affected by the EU AI Act?

Almost certainly yes, if you operate in the EU and use AI in any form. The AI Act applies not only to companies that develop AI systems but also to "deployers" — organizations that use AI systems in their professional activities. If your teams use AI-powered tools for recruitment, customer service, data analysis, content creation, or any other business function, you have obligations under the AI Act. Even using third-party SaaS products with embedded AI features counts. The scope is deliberately broad: if AI influences decisions that affect people, the regulation applies.

What are the penalties under the EU AI Act?

The AI Act establishes a three-tier penalty structure. Violations involving prohibited AI practices carry fines of up to €35 million or 7% of global annual turnover, whichever is higher. Non-compliance with high-risk AI obligations can result in fines of up to €15 million or 3% of global turnover. Providing incorrect or misleading information to regulatory authorities carries fines of up to €7.5 million or 1% of turnover. For SMEs and startups, fines are proportional but can still represent millions of euros. Beyond financial penalties, the reputational damage from a public enforcement action can significantly impact business relationships and market confidence.

When does the EU AI Act enter into force?

The AI Act entered into force on August 1, 2024, but its provisions apply in phases. Prohibitions on unacceptable-risk AI practices took effect on February 2, 2025 — these are already enforceable. Rules for General-Purpose AI models (like GPT-4 and Claude) apply from August 2, 2025. The main body of the regulation, including obligations for high-risk AI systems, becomes enforceable on August 2, 2026. Final provisions for AI systems embedded in EU-regulated products take effect August 2, 2027. For most companies, August 2026 is the key compliance deadline.

Do I need a DPO for AI Act compliance?

The AI Act does not specifically require appointing a Data Protection Officer (DPO) for AI compliance. However, if you already have a DPO under GDPR, they are a natural candidate to coordinate AI Act compliance given the significant overlap between data protection and AI regulation — particularly around fundamental rights impact assessments, transparency obligations, and data governance. For PMEs and ETIs without a DPO, designating an "AI compliance lead" is recommended. This person doesn't need to be a lawyer — they need to understand your AI systems, the regulatory framework, and have the authority to drive compliance processes across departments.

How does the EU AI Act differ from GDPR?

The GDPR protects personal data while the AI Act regulates AI systems regardless of whether they process personal data. They are complementary — an AI system processing personal data must comply with both. The AI Act adds requirements around risk assessment, transparency, human oversight, and system documentation that go beyond data protection. Companies with strong GDPR compliance are better positioned for AI Act compliance, as many governance processes overlap.

Does the EU AI Act apply outside of France?

Yes, the AI Act applies across all 27 EU member states as a directly applicable regulation. It also has extraterritorial reach — any company placing AI systems on the EU market or whose AI outputs are used in the EU must comply, regardless of where the company is headquartered. This is similar to GDPR's extraterritorial scope. Companies in the UK, US, or other non-EU countries that serve EU customers or employ EU-based workers are affected.

What is a deployer under the AI Act?

A deployer is any natural or legal person that uses an AI system in a professional capacity. This includes companies that use third-party AI tools like ChatGPT, Microsoft Copilot, or AI-powered HR software in their business operations. Deployers have specific obligations under the AI Act, particularly for high-risk systems: conducting fundamental rights impact assessments, ensuring human oversight, maintaining transparency with affected individuals, and keeping usage logs. Most SMBs, mid-market companies, and enterprises are deployers, not providers.

How should I prepare for AI Act compliance if I use AI tools from US providers?

AI providers like OpenAI, Microsoft, Google, and Anthropic operating in the EU market must meet provider obligations including model documentation, risk assessments, and transparency requirements. As a deployer, your obligations remain the same regardless of where your AI provider is based. You should verify that your providers are taking steps toward AI Act compliance, include AI Act compliance requirements in your procurement contracts, and maintain your own documentation of how you use these systems in your operations.

Sources and References:

  • European Parliament and Council, Regulation (EU) 2024/1689 — the Artificial Intelligence Act (2024)
  • France Digitale, "AI Readiness Survey: French SMEs and the AI Act" (2025)
  • McKinsey & Company, "France AI Survey: Enterprise AI Governance Maturity" (2025)
  • Salesforce, "Global AI Skills Report" (2025)
  • INSEE, "Adoption de l'intelligence artificielle par les entreprises en France" (2024)
  • CNIL, Rapport annuel d'activité 2023
  • France Compétences, "Baromètre de la formation professionnelle et des compétences IA" (2026)
  • European Commission, AI Act Implementation Guidelines and FAQ (2025)

About Spicy Advisory

Spicy Advisory helps SMBs, mid-market companies, and enterprises across France and Europe navigate AI adoption through hands-on training, governance consulting, and compliance support. Our AI Governance Training program bridges the gap between AI regulation and practical implementation — no junior consultants, no legal jargon, just actionable results from day one.

Book an AI Act Compliance Assessment