The EU AI Act is not coming — it's here. The world's first comprehensive AI regulation entered into force in August 2024, and the compliance deadlines are arriving fast. Yet according to a 2025 survey by France Digitale, 67% of French SMEs and mid-market companies have no concrete plan for AI Act compliance. If you're a CEO, compliance manager, or HR director at a PME or ETI, this regulatory blind spot could cost you up to €35 million or 7% of your global annual turnover — whichever is higher.
By Toni Dos Santos, Co-Founder, Spicy Advisory
Why the AI Act Matters to Companies That Use AI — Not Just Those That Build It
There's a common misconception that the EU AI Act only affects technology companies that develop AI systems. This is dangerously wrong. The regulation applies along the entire AI value chain — including deployers, which is the legal term for companies that use AI systems in their operations. If your sales team uses an AI-powered CRM, if your HR department screens CVs with AI tools, or if your finance team relies on AI for credit scoring, you have obligations under the AI Act.
For French PMEs and ETIs, this is particularly significant. According to INSEE, 35% of French companies with 10 or more employees used AI in 2024, a figure that has likely grown substantially since. Many of these companies adopted AI tools without considering regulatory implications — and the compliance clock is now ticking.
The Risk Classification System: Understanding Where Your AI Falls
The AI Act introduces a tiered risk framework that determines your compliance obligations. Think of it like the CE marking system for products — but for artificial intelligence.
Unacceptable Risk (Prohibited)
These AI practices are banned outright since February 2, 2025:
- Social scoring systems that evaluate individuals based on personal characteristics or behavior
- AI that exploits vulnerabilities of specific groups (age, disability, economic situation)
- Real-time biometric identification in public spaces (with narrow law enforcement exceptions)
- Emotion recognition in workplaces and educational institutions
- Untargeted scraping of facial images from the internet or CCTV
If any of your AI tools fall into these categories, you must cease usage immediately. There is no grace period — the prohibition is already in force.
High Risk (Strict Obligations)
High-risk AI systems face the heaviest compliance requirements, enforceable from August 2, 2026. These include AI used in:
- Employment and HR: CV screening, candidate ranking, performance evaluation, promotion decisions
- Creditworthiness and insurance: AI-driven credit scoring, risk assessment, pricing
- Education: Student assessment, admissions decisions
- Critical infrastructure: Energy, water, transport management systems
- Law enforcement and justice: Predictive policing, evidence assessment
For deployers of high-risk systems, obligations include: conducting fundamental rights impact assessments, ensuring human oversight, maintaining logs, informing employees when AI is used in HR decisions, and cooperating with regulatory authorities.
Limited Risk (Transparency Obligations)
AI systems that interact directly with people must meet transparency requirements. This includes:
- Chatbots must disclose that users are interacting with AI, not a human
- AI-generated content (deepfakes, synthetic text) must be labeled as such
- Emotion recognition and biometric categorization systems must inform the people exposed to them
Minimal Risk (No Specific Obligations)
AI applications like spam filters, AI-assisted writing tools for internal use, or inventory optimization systems face no specific obligations under the AI Act — though general principles of responsible AI use still apply.
The Compliance Timeline: Dates Every Leader Must Know
The AI Act rolls out in phases. Here are the critical milestones:
- February 2, 2025: Prohibitions on unacceptable-risk AI take effect. All banned practices must have ceased.
- August 2, 2025: Rules for General-Purpose AI (GPAI) models apply. If you use foundation models like GPT-4, Claude, or Gemini, your providers must comply with transparency and documentation requirements.
- August 2, 2026: The bulk of the regulation takes effect. High-risk AI system obligations become enforceable. National supervisory authorities must be operational.
- August 2, 2027: Remaining provisions for high-risk AI systems embedded in other EU-regulated products take effect.
For most PMEs and ETIs, August 2026 is the critical deadline. That gives you less than 18 months from today to inventory your AI systems, assess risk levels, and implement compliance measures.
The Awareness Gap: Why PMEs and ETIs Are Exposed
Large enterprises — particularly CAC 40 companies — have mobilized legal and compliance teams around the AI Act since 2023. Mid-market and smaller companies have not. The data paints a stark picture:
- 67% of French PMEs have no AI Act compliance roadmap (France Digitale, 2025)
- Only 12% of ETIs have appointed someone responsible for AI governance (McKinsey France AI Survey, 2025)
- 73% of French professionals feel under-skilled in AI (Salesforce, 2025), which extends to regulatory knowledge
- Meanwhile, the CNIL received over 16,000 complaints in 2023 — demonstrating that French regulators are actively engaged and that citizens are increasingly aware of their rights regarding AI and data
This awareness gap creates real business risk. When enforcement begins in August 2026, regulatory authorities won't distinguish between companies that didn't know and companies that didn't care.
The SPICY AI Act Compliance Framework: Your 5-Step Action Plan
At Spicy Advisory, we've developed a structured methodology to help PMEs and ETIs achieve AI Act compliance without paralysis. We call it the SPICY Compliance Framework — five actionable steps that take you from uncertainty to readiness.
S — Scan: Inventory Your AI Systems
You cannot comply with regulations for systems you don't know about. Start with a comprehensive AI inventory across all departments. Map every AI tool, model, and automated decision-making process in your organization. Include third-party SaaS tools with AI features — these count too. The output: a complete register of AI systems with their purpose, data inputs, and decision scope.
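The register this step produces can be as simple as a shared spreadsheet, but structuring it as data makes the later classification step easier. The sketch below is purely illustrative — the field names and example tools are hypothetical, not prescribed by the AI Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    """One entry in the AI system register (field names are illustrative)."""
    name: str
    department: str                  # team that uses the tool
    purpose: str                     # what decisions or tasks it supports
    data_inputs: list[str] = field(default_factory=list)
    third_party: bool = True         # SaaS tools with embedded AI count too

# Example register built from a department survey (hypothetical entries)
register = [
    AISystem("CV screening tool", "HR", "rank incoming applications",
             ["CVs", "cover letters"]),
    AISystem("Support chatbot", "Customer service", "answer customer questions",
             ["chat messages"]),
]

# Quick per-department view, e.g. to spot departments missing from the survey
by_department: dict[str, list[str]] = {}
for system in register:
    by_department.setdefault(system.department, []).append(system.name)

print(by_department)
```

Even this minimal structure captures the three things the Scan step asks for: each system's purpose, its data inputs, and who relies on it.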
P — Prioritize: Classify by Risk Level
Apply the AI Act's risk classification to each system in your inventory. Focus first on potential prohibited practices (immediate action required) and high-risk systems (August 2026 deadline). Create a prioritized compliance roadmap based on risk level and deadline proximity. Don't try to tackle everything at once — sequence your efforts by regulatory urgency.
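Sequencing by regulatory urgency can be made mechanical once each system has a risk tier. The sketch below maps the article's tiers to their deadlines and sorts a hypothetical inventory accordingly — note that assigning a tier to a real tool is a judgment call that may need legal review:

```python
from datetime import date

# Risk tier -> relevant AI Act deadline, following the article's timeline.
# "minimal" carries no specific obligation, so it sorts last.
DEADLINES = {
    "prohibited": date(2025, 2, 2),   # must already have ceased
    "high": date(2026, 8, 2),
    "limited": date(2026, 8, 2),      # transparency obligations
    "minimal": None,
}

# Hypothetical inventory: (system name, assessed risk tier)
inventory = [
    ("Spam filter", "minimal"),
    ("CV screening tool", "high"),
    ("Customer chatbot", "limited"),
]

# Roadmap ordered by deadline proximity; systems with no deadline go last.
roadmap = sorted(inventory, key=lambda item: DEADLINES[item[1]] or date.max)
print([name for name, _ in roadmap])
```

The output puts the high-risk HR tool at the top of the roadmap and the minimal-risk spam filter at the bottom — exactly the sequencing this step recommends.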
I — Implement: Build Compliance Measures
For each high-risk AI system, implement the required safeguards: fundamental rights impact assessments, human oversight mechanisms, logging and documentation, transparency measures for affected individuals. For limited-risk systems, ensure transparency obligations are met. Establish an internal AI policy that codifies acceptable use and governance procedures.
C — Control: Monitor and Audit
Compliance is not a one-time exercise. Establish ongoing monitoring processes: regular audits of AI system performance and compliance, incident reporting mechanisms, feedback loops from employees and affected individuals, documentation updates as AI systems evolve. Build this into your existing quality management or compliance infrastructure — don't create a parallel bureaucracy.
Y — Y former ("train yourselves in it"): Train Your Teams
The final step — and arguably the most important. Regulation means nothing if your people don't understand it. Train your leadership team on AI Act obligations and strategic implications. Train HR teams on high-risk obligations for AI in recruitment and evaluation. Train all employees on transparency requirements and responsible AI use. This is not a one-time workshop — it's an ongoing literacy program. According to France Compétences, only 15% of French companies have integrated AI literacy into their training plans for 2026, despite growing regulatory requirements.
"The AI Act doesn't ask companies to stop using AI. It asks them to use it responsibly, transparently, and with proper oversight. For most PMEs and ETIs, the compliance gap is not about technology — it's about awareness and process." — Toni Dos Santos, Co-Founder, Spicy Advisory
Sanctions: What's Really at Stake
The AI Act's penalty structure is designed to get attention:
- Prohibited AI practices: Up to €35 million or 7% of global annual turnover
- High-risk AI non-compliance: Up to €15 million or 3% of global annual turnover
- Providing incorrect information to authorities: Up to €7.5 million or 1% of global turnover
For SMEs and startups, the regulation provides for proportional fines — but "proportional" to a company with €50 million in revenue still means potentially millions of euros. The reputational damage of an enforcement action may be even more costly than the fine itself.
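The "whichever is higher" rule is simple arithmetic, but it is worth seeing why the percentage dominates for large firms while the fixed cap dominates for smaller ones. A minimal sketch of the headline rule (setting aside the Act's proportionality adjustments for SMEs):

```python
def max_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """'Whichever is higher': fixed cap vs. percentage of global turnover."""
    return max(cap_eur, pct * turnover_eur)

# Prohibited-practice tier: up to €35 million or 7% of global annual turnover
print(max_fine(50_000_000, 35_000_000, 0.07))     # 7% of €50M is €3.5M -> the €35M cap applies
print(max_fine(1_000_000_000, 35_000_000, 0.07))  # 7% of €1B is €70M -> the percentage applies
```

For a €50 million company, 7% of turnover is €3.5 million, so the €35 million figure governs; only above €500 million in turnover does the percentage take over.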
Practical Steps You Can Take This Week
You don't need a six-month project to start. Here are immediate actions:
- Designate an AI Act owner. Someone in your organization — whether it's your DPO, compliance officer, or a senior manager — needs to own this topic.
- Run a quick AI inventory. Send a simple survey to department heads: "What AI tools does your team use?" You'll likely be surprised by the answers.
- Check for prohibited uses. Cross-reference your inventory against the prohibited practices list. If you find any, stop immediately.
- Brief your executive team. Share this article or a similar summary with your leadership. Compliance starts with awareness.
- Assess your biggest HR AI risks. If you use AI in recruitment, performance reviews, or workforce planning, these are your highest-risk areas. Prioritize them for compliance review.
For companies that want to build a broader AI governance framework, the AI Act compliance process can serve as the foundation for a more comprehensive program that covers both regulatory requirements and operational best practices.
Don't wait for enforcement to start preparing. Spicy Advisory's AI Governance Training program helps PMEs and ETIs build AI Act compliance into their operations — practically, efficiently, and without legal jargon overload. Book a discovery call.
Frequently Asked Questions
Is my company affected by the EU AI Act?
Almost certainly yes, if you operate in the EU and use AI in any form. The AI Act applies not only to companies that develop AI systems but also to "deployers" — organizations that use AI systems in their professional activities. If your teams use AI-powered tools for recruitment, customer service, data analysis, content creation, or any other business function, you have obligations under the AI Act. Even using third-party SaaS products with embedded AI features counts. The scope is deliberately broad: if AI influences decisions that affect people, the regulation applies.
What are the penalties under the EU AI Act?
The AI Act establishes a three-tier penalty structure. Violations involving prohibited AI practices carry fines of up to €35 million or 7% of global annual turnover, whichever is higher. Non-compliance with high-risk AI obligations can result in fines of up to €15 million or 3% of global turnover. Providing incorrect or misleading information to regulatory authorities carries fines of up to €7.5 million or 1% of turnover. For SMEs and startups, fines are proportional but can still represent millions of euros. Beyond financial penalties, the reputational damage from a public enforcement action can significantly impact business relationships and market confidence.
When does the EU AI Act enter into force?
The AI Act entered into force on August 1, 2024, but its provisions apply in phases. Prohibitions on unacceptable-risk AI practices took effect on February 2, 2025 — these are already enforceable. Rules for General-Purpose AI models (like GPT-4 and Claude) apply from August 2, 2025. The main body of the regulation, including obligations for high-risk AI systems, becomes enforceable on August 2, 2026. Final provisions for AI systems embedded in EU-regulated products take effect August 2, 2027. For most companies, August 2026 is the key compliance deadline.
Do I need a DPO for AI Act compliance?
The AI Act does not specifically require appointing a Data Protection Officer (DPO) for AI compliance. However, if you already have a DPO under GDPR, they are a natural candidate to coordinate AI Act compliance given the significant overlap between data protection and AI regulation — particularly around fundamental rights impact assessments, transparency obligations, and data governance. For PMEs and ETIs without a DPO, designating an "AI compliance lead" is recommended. This person doesn't need to be a lawyer — they need to understand your AI systems, the regulatory framework, and have the authority to drive compliance processes across departments.