Your employees are already using AI with company data — the question is whether they're doing it legally. A 2025 study by the CNIL found that 56% of French workers use generative AI tools at work, yet only 10% of organizations have a formal AI usage policy. That gap between usage and governance is not just a compliance risk — it's a ticking time bomb. With GDPR fines averaging €2.1 million per enforcement action in France and the CNIL increasingly focused on AI, the cost of ignorance has never been higher.
By Toni Dos Santos, Co-Founder, Spicy Advisory
The Core Tension: AI Wants Data, GDPR Protects It
Generative AI systems are fundamentally data-hungry. They improve with more context, more examples, more information. GDPR, on the other hand, exists to minimize data processing, ensure purpose limitation, and guarantee individual rights over personal data. These two forces are in direct tension — and your employees navigate this tension every single day, often without guidance.
The problem is not that AI and GDPR are incompatible. They're not. The problem is that most companies have never defined the boundaries. When a salesperson pastes a client's contact details into ChatGPT to draft a follow-up email, they've just transferred personal data to a US-based processor without a legal basis, a Data Processing Agreement, or the data subject's knowledge. That's a GDPR violation — and it happens thousands of times daily across French businesses.
What the CNIL Says About AI in the Workplace
The Commission Nationale de l'Informatique et des Libertés has been remarkably proactive on AI regulation. In 2024 and 2025, the CNIL published a series of guidelines specifically addressing generative AI and data protection. Here are the key principles every French company needs to understand:
Legal Basis Is Non-Negotiable
Any use of AI that processes personal data requires a valid legal basis under Article 6 of GDPR. The CNIL has clarified that legitimate interest can serve as a legal basis for certain AI applications, but it requires a documented balancing test. Consent may be needed when AI processes sensitive personal data or when processing goes beyond what data subjects would reasonably expect.
Transparency to Individuals
When AI systems process personal data — whether employee data or customer data — the individuals concerned must be informed. This means your privacy notices need updating if you've deployed AI tools that process personal data. The CNIL issued 147 formal notices in 2023, many related to transparency failures. AI makes this obligation harder to meet because data flows are often opaque.
Data Minimization Applies to Prompts
The principle of data minimization doesn't stop at your databases. It extends to what your employees type into AI tools. If a task can be accomplished without including personal data in a prompt, the personal data should not be included. This is a cultural shift that requires training, not just policy.
Article 22: Automated Decision-Making
GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. If your company uses AI to make or significantly inform decisions about hiring, firing, credit, insurance, or service access, you must ensure meaningful human involvement in the decision — and provide affected individuals with the right to contest the decision and obtain human review.
La Matrice RGPD-IA Spicy: Classifying Every AI Use Case
At Spicy Advisory, we developed a practical tool to help teams instantly assess the GDPR risk of any AI use case. The Spicy GDPR-AI Matrix uses two axes — whether personal data is involved and whether the AI tool is external (cloud-based, non-EU) or internal/EU-hosted — to create four zones.
Green Zone: No Personal Data + Internal/EU-Hosted AI
Examples: Summarizing public market research with an EU-hosted AI tool. Brainstorming marketing concepts. Generating code suggestions. Analyzing anonymized operational data.
Risk level: Low. Standard AI usage policies apply. No specific GDPR measures required beyond your general data governance.
Amber Zone A: No Personal Data + External AI (US/Non-EU)
Examples: Using ChatGPT to draft internal communications without personal data. Generating presentation outlines with Claude. Creating social media content ideas with Gemini.
Risk level: Moderate. While no personal data is at risk, confidential business information may be shared with non-EU processors. Ensure your enterprise license prevents training on your data. Review your provider's data processing terms. Be aware that some tools retain prompts for improvement purposes unless enterprise settings are configured.
Amber Zone B: Personal Data + Internal/EU-Hosted AI
Examples: Using an EU-hosted AI tool to analyze employee satisfaction surveys containing names. Processing customer feedback data with identifiable information through an on-premise AI system. Running HR analytics on internal AI infrastructure.
Risk level: Moderate to High. A legal basis under GDPR is required. Data minimization must be applied — anonymize or pseudonymize where possible. Privacy notices must be updated. For sensitive data (health, union membership, ethnicity), additional safeguards apply. Consider whether a Data Protection Impact Assessment (DPIA) is needed.
Red Zone: Personal Data + External AI (US/Non-EU)
Examples: Pasting client contact details into ChatGPT. Uploading employee performance reviews to an external AI tool. Using a US-hosted AI service to process customer health data. Feeding CVs into a non-EU AI screening tool.
Risk level: High to Critical. This is where most GDPR violations with AI occur. You need: a valid legal basis, a Data Processing Agreement with the AI provider, an adequate transfer mechanism for international data transfers (Standard Contractual Clauses at minimum), updated privacy notices, potentially a DPIA, and the ability to respond to data subject rights requests. In many cases, the safest approach is simply not to do this until proper safeguards are in place.
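The two-axis logic behind the matrix can be sketched as a small lookup. This is an illustrative sketch, not an official Spicy Advisory tool; the function and zone labels are hypothetical names chosen for the example:

```python
# Sketch of the two-axis GDPR-AI Matrix described above.
# Zone summaries paraphrase the four zones; the legal assessment
# itself still belongs to your DPO.

def classify_use_case(involves_personal_data: bool, external_non_eu_tool: bool) -> str:
    """Map an AI use case onto the four matrix zones."""
    if not involves_personal_data and not external_non_eu_tool:
        return "Green: standard AI usage policy applies"
    if not involves_personal_data and external_non_eu_tool:
        return "Amber A: check enterprise license and provider terms"
    if involves_personal_data and not external_non_eu_tool:
        return "Amber B: legal basis, minimization, possibly a DPIA"
    return "Red: DPA, transfer mechanism, DPIA required - or do not proceed"

# Example: pasting client contact details into a US-hosted consumer tool
print(classify_use_case(involves_personal_data=True, external_non_eu_tool=True))
```

Even this trivial formalization is useful in practice: it forces every use case through the same two questions before anyone opens an AI tool.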
Practical Scenarios: What's OK and What's Not
Theory is useful, but your teams need concrete guidance. Here are common scenarios French companies face:
Scenario 1: Summarizing Meeting Notes with AI
OK if: You remove participant names and any personal data before pasting into the AI tool, or you use an enterprise-licensed EU-hosted tool with a proper DPA. The summary focuses on decisions and action items, not individual statements attributable to specific people.
Not OK if: You paste raw meeting notes including names, personal opinions, performance discussions, or salary information into a consumer-grade AI tool.
Scenario 2: Using AI to Draft Client Communications
OK if: You provide the AI with the type of client and communication context without including actual client personal data. "Draft a follow-up email for a mid-market CFO who expressed interest in our analytics platform" is fine.
Not OK if: You paste the client's name, email, company details, and conversation history into the prompt. That's transferring personal data without proper safeguards.
Scenario 3: AI-Assisted CV Screening
Possible if: You use a properly vetted, GDPR-compliant recruitment AI tool with a DPA, inform candidates that AI is used in the screening process, ensure meaningful human oversight of all decisions, provide candidates the ability to contest AI-influenced decisions, and conduct a DPIA. Recruitment AI is classified as high-risk under the EU AI Act, and under GDPR it constitutes high-risk processing that triggers the DPIA requirement.
Not OK if: You upload CVs to ChatGPT or a non-compliant tool and ask it to rank candidates. This violates multiple GDPR principles and exposes you to significant liability.
Scenario 4: Employee Monitoring with AI
Highly restricted. French labor law (Code du travail) and CNIL guidelines impose strict limits on employee monitoring. AI-powered surveillance of employee emails, keystrokes, screen activity, or productivity metrics requires: consultation with employee representatives (CSE), proportionality assessment, prior information to employees, and a valid legal basis. The CNIL has repeatedly sanctioned companies for excessive employee monitoring. Adding AI to surveillance doesn't make it more legal — it makes it more risky.
Scenario 5: AI for Customer Service Chatbots
OK if: The chatbot clearly identifies itself as AI (transparency obligation), personal data collected is limited to what's necessary, data is processed within the EU or with adequate transfer safeguards, customers can request human intervention, and the privacy policy covers AI-powered interactions.
Not OK if: The chatbot pretends to be human, collects excessive personal data, or makes automated decisions about customer service levels without human oversight.
The DPIA Question: When Do You Need One?
A Data Protection Impact Assessment (AIPD in French — Analyse d'Impact relative à la Protection des Données) is mandatory when AI processing is likely to result in high risk to individuals. The CNIL has published a list of processing types that require a DPIA, and several are directly relevant to AI:
- Large-scale profiling with significant effects on individuals
- Automated decision-making with legal or significant effects
- Systematic monitoring of employees
- Processing of sensitive data at scale
- Innovative use of new technologies (which includes many AI applications)
If your AI use case falls into any of these categories, a DPIA is not optional. According to a 2025 survey by the Association Française des DPO, only 23% of French companies have conducted a DPIA for their AI tools — meaning the vast majority are non-compliant on this specific requirement.
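The CNIL criteria listed above can be expressed as a simple screening check. A minimal sketch, assuming hypothetical flag names for each trigger — this screens for whether a DPIA is needed, it does not replace the DPIA itself:

```python
# Screening sketch based on the CNIL's DPIA trigger criteria listed above.
# Flag names are illustrative; mapping a real use case onto them is the
# DPO's legal judgment, not something code can decide.

DPIA_TRIGGERS = [
    "large_scale_profiling",
    "automated_decision_making",
    "systematic_employee_monitoring",
    "sensitive_data_at_scale",
    "innovative_technology",
]

def dpia_required(use_case_flags: set) -> bool:
    """A DPIA is mandatory if any CNIL trigger criterion applies."""
    return any(flag in use_case_flags for flag in DPIA_TRIGGERS)

# Example: AI-assisted CV screening hits at least two triggers
print(dpia_required({"automated_decision_making", "innovative_technology"}))  # True
```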
Data Residency: The US vs. EU Server Question
One of the most common questions we hear: "Does it matter where the AI servers are?" The answer is an emphatic yes.
Since the Schrems II ruling and despite the EU-US Data Privacy Framework adopted in July 2023, transferring personal data to US-based AI services requires careful legal analysis. The Data Privacy Framework provides a valid transfer mechanism, but only for companies that are certified under it. You must verify that your specific AI provider is certified — and even then, additional safeguards may be advisable for sensitive data categories.
For French companies, the practical implications are clear:
- Enterprise versions of AI tools (ChatGPT Enterprise, Microsoft Copilot with EU data residency, Google Gemini for Workspace) generally offer better data protection guarantees than consumer versions
- EU-hosted alternatives (Mistral AI, Aleph Alpha, EU-region Azure OpenAI) reduce transfer risk significantly
- On-premise or private cloud deployments eliminate the transfer question entirely but require more technical investment
The CNIL has signaled that it will scrutinize AI-related international data transfers closely. In 2023 alone, the CNIL imposed €89 million in total fines, with several cases involving international data transfer violations. Don't assume your AI provider has handled GDPR compliance for you — verify it.
Building a GDPR-Compliant AI Policy: A Practical Checklist
Every French company using AI needs a written AI usage policy that addresses GDPR. Here's what it should cover:
- Approved AI tools: List the tools your company has vetted for GDPR compliance, specifying which have enterprise licenses and DPAs
- Data classification for AI: Define what types of data can be used with which AI tools, using the GDPR-AI Matrix zones
- Personal data prohibition for external tools: Unless specific safeguards are in place, prohibit entering personal data into external AI tools
- Prompt hygiene guidelines: Teach employees to anonymize, pseudonymize, and minimize data in prompts
- Output review requirements: AI-generated content containing or influenced by personal data must be reviewed before use
- Incident reporting: Clear procedures for reporting accidental personal data exposure through AI tools — this may constitute a data breach requiring notification under Article 33
- Training requirements: Ongoing education for all employees on GDPR-AI intersection, not just a one-time read-and-sign
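The "prompt hygiene" item in the checklist above can be partially automated. A minimal sketch using regex-based redaction — the patterns are illustrative, and a real deployment would use a dedicated PII-detection tool plus human review, since regex cannot catch names or free-text identifiers:

```python
import re

# Minimal prompt-hygiene sketch: strip common personal-data patterns
# before a prompt leaves the company. Illustrative only - names like
# "Marie" below pass through untouched, which is exactly why regex
# alone is not sufficient for GDPR-grade anonymization.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE_FR": re.compile(r"\b0\d(?:[ .-]?\d{2}){4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched personal-data patterns with placeholder tags."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Follow up with Marie at marie.dupont@exemple.fr or 06 12 34 56 78."))
```

A redaction step like this belongs in any internal AI gateway or browser extension your IT team provides, but the policy should still require employees to check prompts manually — automation supports data minimization, it does not discharge it.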
For organizations looking to build comprehensive AI governance frameworks, the GDPR-compliant AI policy should be a cornerstone document, alongside your AI Act compliance roadmap.
"The companies that will thrive in the AI era are not the ones that use AI the most aggressively. They're the ones that use it the most intelligently — which means understanding exactly where the legal boundaries are and operating confidently within them." — Toni Dos Santos, Co-Founder, Spicy Advisory
The Cost of Getting It Wrong
The financial exposure is real and growing:
- Maximum GDPR fines: €20 million or 4% of global annual turnover
- Average CNIL fine in major cases: €2.1 million (2023 data)
- CNIL complaints received in 2023: Over 16,000 — a 35% increase from 2021
- Employee litigation risk: French labor courts (Conseils de Prud'hommes) increasingly consider AI-related privacy violations in wrongful termination and discrimination cases
Beyond fines, the reputational impact of a CNIL enforcement action is severe. French consumers and business partners are among the most privacy-conscious in Europe, and a public sanction for AI-related GDPR violations can damage trust that took years to build.
Need help navigating the AI-GDPR intersection? Spicy Advisory's AI Governance Training equips your DPO, HR teams, and managers with practical frameworks for GDPR-compliant AI adoption — including the GDPR-AI Matrix, policy templates, and scenario-based training. Book a discovery call.
Frequently Asked Questions
Can we use ChatGPT with client data?
Not with the consumer version, and with the enterprise version only under significant safeguards. Entering client personal data (names, contact details, financial information, health data) into ChatGPT's consumer version violates GDPR on multiple grounds: no Data Processing Agreement, potential international data transfer without adequate safeguards, and likely violation of data minimization and purpose limitation principles. ChatGPT Enterprise offers better guarantees — data is not used for training, and a DPA is available — but you still need a valid legal basis, updated privacy notices, and should verify the provider's EU-US Data Privacy Framework certification. The safest approach is to anonymize all client data before using any AI tool.
Do we need a DPIA to use AI?
It depends on the use case, but probably yes for many AI applications. A Data Protection Impact Assessment (AIPD) is mandatory under GDPR when processing is likely to result in high risk to individuals' rights and freedoms. The CNIL's published criteria include large-scale profiling, automated decision-making with significant effects, systematic monitoring, processing of sensitive data, and innovative use of new technologies. Most enterprise AI use cases involving personal data — HR analytics, customer profiling, AI-assisted decision-making — will trigger at least one of these criteria. Only 23% of French companies have conducted DPIAs for their AI tools, meaning most are technically non-compliant. If in doubt, conduct the DPIA — the process itself helps identify and mitigate risks.
What does the CNIL say about AI in business?
The CNIL has been one of Europe's most active regulators on AI and data protection. In 2024-2025, it published comprehensive guidelines addressing generative AI, including guidance on legal bases for AI training data, transparency obligations, data subject rights in AI contexts, and AI-specific DPIA requirements. Key positions include: legitimate interest can serve as a legal basis for certain AI processing but requires a documented balancing test; data minimization applies to AI prompts; individuals must be informed when AI processes their personal data; and Article 22 protections apply to AI-driven automated decisions. The CNIL also created a dedicated AI team to handle complaints and enforce compliance in this area. French companies should treat CNIL AI guidelines as de facto requirements, not suggestions.
How do you write a GDPR-compliant AI policy?
A GDPR-compliant AI policy should cover seven key areas: (1) a list of approved AI tools with verified DPAs and GDPR compliance status; (2) data classification rules specifying what data types can be used with which tools; (3) a clear prohibition on entering personal data into non-approved or consumer-grade AI tools; (4) prompt hygiene guidelines teaching employees to anonymize and minimize data; (5) output review requirements for AI-generated content involving personal data; (6) incident reporting procedures aligned with GDPR's 72-hour breach notification requirement; and (7) mandatory ongoing training. The policy should be practical, with clear examples and scenarios, not a legal document nobody reads. It should be reviewed quarterly as AI tools and regulatory guidance evolve. Use the Spicy GDPR-AI Matrix to structure the data classification section — it gives employees an instant visual reference for acceptable use.