AI Act and CNIL: How AI Regulation Is Changing France in 2026

Artificial Intelligence
Nicolas
9 min read

Artificial intelligence is weaving its way into our daily lives, often without us even noticing. Your bank’s chatbot, your email’s spam filter, the algorithm recommending shows on Netflix—all are AI systems you interact with every day. With the gradual implementation of the European AI Act, these technologies will now fall under the watchful eye of a familiar French institution: the CNIL. But what does this actually change for you? And what about for companies developing these tools?

Here’s a breakdown of a regulatory transformation that’s set to reshape the landscape of AI regulation in France.

The AI Act: A Regulation That Classifies AIs by Risk Level

The AI Act takes a straightforward yet effective approach: the greater the potential harm caused by an AI, the stricter the rules. The European regulation distinguishes four categories of risk, each with its own set of requirements.

Prohibited AIs: The Red Line

At the top of the pyramid, unacceptable risk systems are outright banned from the European market.

This includes Chinese-style social scoring, emotion recognition in workplaces or schools, and subliminal manipulation of behavior.

These bans have already been in effect since February 2025.

High-Risk AIs: Under Tight Supervision

High-risk AIs cover sensitive areas: biometrics, critical infrastructure, education, employment, and access to credit.

These tools face stringent obligations. Developers must document their algorithms, ensure the quality of training data, implement human oversight, and guarantee technical robustness and cybersecurity.

They’ll also be required to register their systems in a European database accessible to the public.

Limited-Risk AIs: Mandatory Transparency

Your favorite chatbot falls into this category. The main obligation? Clearly informing you that you’re interacting with a machine.

Deepfakes must also be labeled as such. There’s no excessive paperwork, but transparency is a non-negotiable requirement that CNIL will enforce.

Minimal-Risk AIs: Free Rein

Spam filters, video game AIs, automatic spellcheckers… These systems are exempt from any specific obligations.

The EU considers these pose no significant risk to fundamental rights.
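The four-tier logic above can be summed up in a few lines of code. This is only an illustrative sketch: the category names and keyword lists below are simplifications invented for this example, and a real assessment must follow the regulation's actual annexes, not keyword matching.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned since February 2025
    HIGH = "high"              # strict obligations from 2026-2027
    LIMITED = "limited"        # transparency duties (chatbots, deepfakes)
    MINIMAL = "minimal"        # no specific obligations

# Illustrative examples drawn from the categories described above.
PROHIBITED_USES = {"social scoring", "workplace emotion recognition",
                   "subliminal manipulation"}
HIGH_RISK_AREAS = {"biometrics", "critical infrastructure", "education",
                   "employment", "credit"}
LIMITED_RISK_USES = {"chatbot", "deepfake"}

def classify(use_case: str) -> RiskTier:
    """Map a use-case label to the AI Act's four risk tiers (sketch only)."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("credit").value)      # high
print(classify("spam filter").value) # minimal
```

The key design point the regulation makes, mirrored here, is that the default tier is minimal: only uses explicitly listed in the higher categories carry obligations.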

Timeline to remember: The bans have been effective since February 2025. Obligations for high-risk AIs will come into force between August 2026 and August 2027, depending on the category.

So companies still have a bit more time to bring themselves into compliance.

CNIL Becomes France’s AI Watchdog

Choosing CNIL as the national supervisory authority for the AI Act is no coincidence. The institution already has deep expertise in personal data protection—and AI processes vast amounts of such data. It also has hands-on experience through GDPR enforcement and enjoys public trust and legitimacy.

Powerful Sanctioning Authority

CNIL has been granted substantial sanctioning powers. For violations involving banned practices, fines can reach up to €35 million or 7% of the company’s global annual turnover, whichever is higher. For other breaches, the maximum is €15 million or 3% of global turnover.

These amounts are designed to get even tech giants’ attention.
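The fine ceiling is simply the higher of the fixed amount and the turnover percentage, which is why large companies cannot hide behind the flat euro figures. A minimal sketch of that arithmetic (the function name is mine, not the regulation's):

```python
def max_fine(turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of an AI Act fine: the higher of the fixed amount
    and the percentage of global annual turnover."""
    if prohibited_practice:
        return max(35_000_000, 0.07 * turnover_eur)
    return max(15_000_000, 0.03 * turnover_eur)

# For a company with €2 billion in global turnover:
print(max_fine(2_000_000_000, prohibited_practice=True))   # 140000000.0 (7% wins)
print(max_fine(2_000_000_000, prohibited_practice=False))  # 60000000.0 (3% wins)
```

For a small firm with €50 million in turnover, the percentages fall below the fixed amounts, so the €35 million and €15 million caps apply instead.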

An Equally Important Role: Guidance and Support

Beyond penalties, CNIL will guide companies in reaching compliance. It will publish recommendations, issue guidelines, and organize consultations.

Its mission of education will be crucial to ensure the French AI ecosystem isn’t paralyzed by regulatory complexity.

CNIL will also be tasked with promoting AI literacy: helping the public understand AI systems, so that everyone knows what they’re using and can exercise their rights with full awareness.

For anyone interested in how AI governance could strengthen our democracy, CNIL’s new role fits squarely into this logic of citizen oversight of algorithms.

What Changes for You in Daily Life

Now for the practical implications: how will your day-to-day use of AI change with the new regulations?

Greater Clarity in What You Use

No more chatbots pretending to be human. When interacting with an AI, you must be informed. This transparency requirement also applies to generated content: any image created by AI must be identifiable as such.

Political or advertising deepfakes will need to display a clear warning.

Remedies for Discrimination

If a credit scoring algorithm denies you a loan based on discriminatory grounds, you can file a complaint with CNIL. The same applies if an automated CV-sorting tool rejects you for illegal reasons.

The CNIL’s AI role includes protecting you against algorithmic bias that could affect your professional, financial, or personal life.

Safeguards for Sensitive Uses

Real-time biometric recognition in public spaces will be almost entirely banned (with rare exceptions for anti-terrorism purposes). Your employer won’t be able to use AI to analyze your emotions. These safeguards directly protect your privacy and dignity.

New Obligations for Businesses

For organizations developing or deploying AI systems, the task ahead is significant. CNIL will be their primary contact in France.

Classifying Your AI: A Crucial Step

Every company will need to determine which risk category applies to their AI system. Misclassifying can be costly.

CNIL will have the authority to check and challenge these assessments if it believes a company is downplaying the risks of its product.

Document, Test, Monitor

For high-risk AIs, the obligations are extensive: detailed technical documentation, robustness tests, bias assessment in training data, implementing human oversight, and ongoing monitoring post-deployment…

The European regulator has set a comprehensive framework that companies must follow—or face sanctions.

The GPAI Exception: Large Models Under Stricter Supervision

General-purpose AI models (GPAI) like GPT-4 or Claude face specific obligations. Once their training compute exceeds a certain threshold (10^25 floating-point operations), they are presumed to present a systemic risk.

The European Commission directly oversees these cases, but CNIL will handle deployments within France.
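The systemic-risk presumption boils down to a single threshold comparison on cumulative training compute. A minimal sketch, with hypothetical compute figures for illustration (real training-compute numbers are rarely published precisely):

```python
# Threshold from the AI Act: 10^25 floating-point operations
# of cumulative training compute.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """A GPAI model at or above the compute threshold is presumed
    to present systemic risk and faces enhanced supervision."""
    return training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical models on either side of the line:
print(presumed_systemic_risk(5e24))  # False
print(presumed_systemic_risk(3e25))  # True
```

Note that this is a presumption, not a final determination: the regulation leaves room for the Commission to designate models below the threshold, or for providers to rebut the presumption.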

Practical tip: Startups and SMEs haven’t been forgotten. The AI Act includes regulatory sandboxes where smaller companies can test innovations under lighter supervision before market launch.

The Challenges Ahead for CNIL

Becoming France’s AI regulator is a major challenge for an institution already busy enforcing the GDPR. Several questions remain unanswered.

The Resources Question

Auditing thousands of AI systems, some highly complex, will require advanced expertise.

CNIL will need to recruit data scientists, machine learning specialists, and cybersecurity experts. Will its budget and staff be sufficient to meet this ambition?

Opacity in Models

How do you assess the bias of a neural network comprising billions of parameters? “Black box” models pose a fundamental auditability challenge.

CNIL will need to develop or adopt assessment methodologies adapted to these opaque systems.

European Coordination

The AI Act creates a European AI Office within the European Commission. CNIL must coordinate its work with this supranational body, as well as with sectoral authorities (AMF for finance, ANSSI for cybersecurity, etc.). Such multi-layered governance may create some gray areas.

To further explore the different visions shaping the sector, discover how Altman, Musk, Amodei, Zuckerberg, and LeCun envision the future of AI. These debates directly influence Europe’s regulatory approach.

Toward European Digital Sovereignty?

Behind the AI Act lies a geopolitical challenge. In the face of American and Chinese dominance in AI technologies, Europe is trying to play the ethical regulation card.

The goal: to create a global standard built on respect for fundamental rights—much like how the GDPR set the bar for data privacy.

Thus, CNIL becomes a key player in shaping digital sovereignty. By imposing strict rules on American tech giants wanting to operate in France, it may help level the playing field for European actors, who currently face a disadvantage.

The European gamble: turn regulatory constraints into a competitive edge. Companies that comply with the AI Act can showcase a trustmark to consumers increasingly attuned to ethical concerns.

Whether this strategy will pay off, or end up slowing European innovation compared to less-restricted competitors, remains to be seen. The debate is far from settled.

FAQ

What exactly is the AI Act?

The AI Act is the first European regulation specifically targeting artificial intelligence systems. Adopted in 2024, it classifies AIs by risk level and imposes proportionate obligations on developers and professional users.

Why was CNIL chosen as the competent authority in France?

CNIL brings recognized expertise in data protection and digital technology oversight. Its experience with the GDPR naturally positions it to supervise AI systems handling massive volumes of personal data.

When does the AI Act take effect?

Implementation is gradual. Prohibited practices have been banned since February 2025. Obligations for high-risk AIs will apply between August 2026 and August 2027, depending on the category.

Is my company’s chatbot affected by the AI Act?

Yes, chatbots fall under the “limited risk” category. The main obligation is to inform users that they’re interacting with an AI. If your chatbot handles significant decisions (credit, employment), it may be classified as “high risk,” requiring stricter compliance.

What sanctions are provided for by the AI Act?

Fines can reach up to €35 million or 7% of global turnover, whichever is higher, for the most severe violations. For other breaches, the maximum is €15 million or 3% of global turnover.

How do I know if my AI is “high risk”?

Annex III of the regulation lists the relevant areas: biometrics, critical infrastructure, education, employment, essential public services, migration, justice. If your AI operates in any of these sectors, it is probably high risk.

Can individuals file a complaint with CNIL about an AI?

Yes. If you believe an AI system has infringed your rights (discrimination, lack of transparency, bias), you can approach CNIL, which will review your case and may launch an investigation.

Will the AI Act put French startups at a disadvantage?

The regulation includes relief measures for SMEs and startups, such as regulatory sandboxes. The aim is to foster innovation while also safeguarding fundamental rights.

Are ChatGPT and Claude subject to the AI Act?

Yes. These general-purpose AI models (GPAI) are subject to specific requirements. Beyond a certain computing power threshold, they are deemed to present systemic risk and must undergo enhanced supervision.

Does the AI Act apply to AIs developed outside Europe?

Yes—the regulation applies to any AI system placed on the EU market or whose outputs are used within the EU, regardless of the developer’s location. That means American tech giants are fully affected.


Ready to scale your business?

Anthem Creation supports you in your AI transformation

Availability: 2 new projects for February/March
Book a discovery call