The Digital Omnibus of the AI Act has just rewritten the European AI timeline.
In the early hours of May 7, 2026, around 4:30 am, the EU Council and the European Parliament reached a provisional political agreement that delays high-risk AI obligations by sixteen months (source: Bird & Bird, press release May 8, 2026).
The agreement maintains several firm deadlines for French tech SMEs and digital agencies in 2026.
A quick read suggests collective relief. A closer read reveals a three-phase timeline that decides who can breathe in 2026 and who must already be working on AI Act compliance.
Here are the key changes enacted and the 6 to 8-week action plan to meet them.
- Annex III postponed from August 2, 2026, to December 2, 2027: 16 more months of runway for high-risk autonomous systems (recruitment, biometrics, credit, education).
- Article 50 transparency remains locked to August 2, 2026: chatbot, deepfake, AI-generated content, no delay, fines up to €7.5M.
- Watermarking accelerated to December 2, 2026: Parliament cut the grace period from 6 to 3 months for machine-readable marking of AI content.
- New ban on "nudify" tools and CSAM generation by December 2, 2026: added to Article 5, applicable even to SMEs reselling a third-party tool.
- Small mid-cap (up to 750 employees and €150M turnover): new category with simplified documentation and proportionate sanction caps.
- Legal status as of May 12, 2026: political agreement, not yet law, with final adoption targeted before August 2, 2026, failing which the original timeline applies.
The night of May 7: what happened at 4:30 am for the AI Act
From the broken trilogue of April 28 to the agreement reached on May 7
On April 28, 2026, the previous trilogue (tripartite negotiation between the Council, Parliament, and Commission) stalled on the conformity assessment of Annex I products.
Nine days later, under pressure from the original timeline of August 2, 2026, negotiators returned to the table and sealed a provisional agreement around 4:30 am on May 7, 2026 (source: Bird & Bird, May 8, 2026).
The content enacts three major political choices: fixed dates instead of a conditional mechanism, new CSAM ban in Article 5, and tightening of watermarking by December 2, 2026.
The compromise follows the position adopted on March 26, 2026, by the European Parliament, with 569 votes in favor, 45 against, and 23 abstentions.
Exact legal status as of May 12, 2026
The agreement is political, not yet legal.
The text must be formally adopted by the Council and Parliament, then undergo legal-linguistic revision before publication in the OJEU (Official Journal of the European Union).
The institutions aim for adoption before August 2, 2026; if the text is not published in the OJEU by that deadline, the original AI Act timeline automatically applies (source: benoitbellaiche.substack.com).
Any French tech SME freezing its high-risk audit based on the delay is taking a calendar gamble.
The Cypriot presidency of the Council holds the lever until June 30, 2026, and is pushing for rapid adoption.
For broader European dynamics behind this vote, read our analysis of the 22 proposals from Mistral for European AI sovereignty.
The agreement remains subject to formal adoption in the coming weeks.
As it stands, the new application dates (December 2, 2027, for Annex III and August 2, 2028, for Annex I) can serve as a planning basis, pending adoption before August 2, 2026 (Bird & Bird).
Annex III and Annex I: sixteen months of reprieve for high-risk AI Act
The first block of the agreement is simple arithmetic.
The obligations for high-risk systems in Annex III (eight listed domains: biometrics, critical infrastructures, education, employment and HR, essential services like credit or insurance, law enforcement, border control, justice) move from August 2, 2026, to December 2, 2027.
This means 16 additional months for CE audits, technical documentation, bias tests, and documented human oversight (source: Ethicore Tracker, May 2026).
The systems in Annex I (AI embedded in already regulated products: medical devices, machinery, toys, vehicles) move from August 2, 2027, to August 2, 2028.
This means 12 additional months to align CE marking and AI conformity assessment.
National sandboxes (regulatory sandboxes, test environments supervised by an oversight authority) move from August 2, 2026, to August 2, 2027, with the creation of a central EU sandbox managed by the AI Office (source: OneTrust, December 1, 2025).
The initial conditional mechanism has been replaced by non-negotiable fixed dates (source: aiacto.eu, March 18, 2026).
The practical effect for an 80-employee SaaS HR publisher offering a CV pre-screening module with AI scoring: it typically remains in Annex III and gains 16 months, but should start documenting now, because an audit crossing GDPR Article 22 with Articles 9 to 15 of the AI Act cannot be completed in 8 weeks.
The trap of “everything is postponed”: what remains locked to August 2, 2026
This is the most misunderstood point in the French and Anglo-Saxon press.
Three blocks of obligations do not move and remain applicable on the scheduled dates.
Article 50 transparency: firm on August 2, 2026
Article 50 imposes four distinct transparency obligations: chatbot disclosure, machine-readable marking of AI-generated content, visible labeling of deepfakes, reporting of AI texts of public interest (source: artificialintelligenceact.eu).
The May 7, 2026 agreement does not affect this date.
A 30-employee WordPress agency in Lyon installing a Claude chatbot for its e-commerce clients must display “you are chatting with an AI” by August 2, 2026 at the latest.
Non-compliance exposes the company to a fine of up to €7.5M or 1% of global turnover (source: nerolia-formation.fr, May 3, 2026).
ARCOM and the DGCCRF share oversight of Article 50 in France (source: vie-publique.fr, Article 77 of the regulation).
GPAI Articles 53-55 and historical Article 5: already active
Obligations on GPAI (general-purpose AI models: GPT-4, Claude, Gemini, Mistral, Llama) have been in effect since August 2, 2025: technical documentation, copyright policy, training data summary, and reporting of serious incidents remain due.
Article 5, which bans unacceptable practices (social scoring, subliminal manipulation, exploitation of vulnerabilities, real-time remote biometric identification for law enforcement), has been in effect since February 2, 2025.
A violation of Article 5 still exposes the company to fines of up to €35M or 7% of global turnover.
If your marketing team generates synthetic creatives, runs an AI chatbot, or produces AI content for European consumers, the transparency clock runs to August 2, 2026, and the content-marking clock to December 2, 2026 (Ethicore, May 2026).
What the Digital Omnibus adds or accelerates in the AI Act
Three additions change the daily life of SMEs, regardless of the high-risk delay.
First addition: a new Article 5 ban targets AI tools that generate child sexual abuse material (CSAM) or non-consensual intimate imagery (nudifying).
The ban covers market placement, marketing without technical safeguards, and usage by deployers; companies have until December 2, 2026 to align their systems, including third-party tool resellers (source: Bird & Bird, May 8, 2026).
Second addition: Article 50(2) watermarking applies on August 2, 2026 for new systems but benefits from a transitional period until December 2, 2026 for generative AI systems already on the market.
This is tighter than the initially proposed February 2, 2027, by the Commission, with Parliament cutting the grace period from 6 to 3 months (source: Ethicore Tracker, May 2026).
A freelance prompt engineer publishing a newsletter 80% generated by GPT-4 must aim for machine-readable marking by December 2, 2026 at the latest.
Third addition: the centralization of GPAI supervision in the AI Office, the European authority created by the regulation.
For SMEs consuming GPAI models without supplying them, it’s an indirect signal: upstream providers (OpenAI, Anthropic, Google, Mistral) will be audited in Brussels, clarifying B2B contractual responsibilities.
Small mid-cap: the AI Act arbitration zone no one is watching
The agreement creates a new category of companies absent from the initial AI Act version.
The small mid-cap includes companies with fewer than 750 employees and less than €150M annual turnover or €129M balance sheet (source: OneTrust).
Three reliefs are enacted: simplified technical documentation on high-risk systems, priority access to national sandboxes and the central EU sandbox, and proportionate sanction caps (the lesser of a fixed amount or percentage of turnover).
The scope covers about 38,000 companies in the EU, including several thousand in France (source: European Commission, IP_26_1024).
| Category | Thresholds | Reliefs | Sanction cap |
|---|---|---|---|
| SME | < 250 employees and < €50M turnover | Simplified documentation, free sandbox | The lesser of €15M or 3% of turnover |
| Small mid-cap | < 750 employees and < €150M turnover | Simplified documentation, priority sandbox | The lesser of €15M or 3% of turnover |
| Large company | ≥ 750 employees or ≥ €150M turnover | None | €15M or 3% of turnover (whichever is higher) |
For a growing agency between 250 and 750 employees, the structuring issue becomes clear: crossing the SME threshold no longer tips the company into "large company" status; the small mid-cap relief holds up to 750 employees.
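As a quick sanity check, the thresholds and the "lesser of / greater of" cap logic from the table above can be expressed in a few lines of Python. This is a sketch based on the figures in this article, not legal advice, and the exact cap varies by infringement type in the regulation:

```python
# Sketch of the size categories and sanction caps described in the table.
# Thresholds and amounts are taken from the article; illustrative only.

def classify_company(employees: int, turnover_eur: float) -> str:
    """Return the size category under the agreement's thresholds."""
    if employees < 250 and turnover_eur < 50e6:
        return "SME"
    if employees < 750 and turnover_eur < 150e6:
        return "small mid-cap"
    return "large company"

def sanction_cap(category: str, turnover_eur: float) -> float:
    """Cap of EUR 15M or 3% of turnover for this infringement class.

    SMEs and small mid-caps get the *lesser* of the two amounts,
    large companies the *greater*.
    """
    fixed, pct = 15e6, 0.03 * turnover_eur
    if category in ("SME", "small mid-cap"):
        return min(fixed, pct)
    return max(fixed, pct)

if __name__ == "__main__":
    cat = classify_company(employees=80, turnover_eur=12e6)
    print(cat, sanction_cap(cat, 12e6))
```

For the 80-employee SME of the earlier example, the proportionate cap (3% of €12M, i.e. €360k) is far below the fixed €15M, which is exactly the relief the category is meant to provide.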
6 to 8-week action plan for a French tech SME facing the AI Act
The May 7, 2026 agreement rewrites the timeline without closing any projects. Here is a schedule running from mid-May to mid-July 2026.
Weeks 1-2: inventory and classification of AI systems
List all affected AI components: client chatbots, internal tools, embedded third-party SaaS modules, generated marketing content.
For each component, assign a label: Article 5 (banned), high-risk Annex III (HR, scoring, biometrics), Article 50 (chatbot, generated content), GPAI user (consumption of GPT-4, Claude, Mistral).
Document supplier, version, purpose, usage volume, and cross-referenced GDPR status.
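The inventory exercise above can be sketched as a simple data structure. The component names, fields, and label values below are illustrative assumptions, not a mandated schema:

```python
# Minimal inventory sketch for the weeks 1-2 exercise.
# Labels mirror the article's four buckets; example components are hypothetical.
from dataclasses import dataclass

LABELS = {"article_5", "annex_iii_high_risk", "article_50_transparency", "gpai_user"}

@dataclass
class AIComponent:
    name: str
    supplier: str
    version: str
    purpose: str
    monthly_volume: int
    gdpr_status: str   # e.g. "DPIA done", "Article 22 review pending"
    label: str         # one of LABELS

    def __post_init__(self):
        if self.label not in LABELS:
            raise ValueError(f"unknown AI Act label: {self.label}")

inventory = [
    AIComponent("client-chatbot", "Anthropic", "claude-3", "e-commerce support",
                12_000, "DPIA done", "article_50_transparency"),
    AIComponent("cv-prescreening", "in-house", "1.4", "recruitment scoring",
                800, "Article 22 review pending", "annex_iii_high_risk"),
]

# The high-risk subset drives the weeks 7-8 documentation workload.
high_risk = [c.name for c in inventory if c.label == "annex_iii_high_risk"]
```

Even a spreadsheet works at this stage; the point is that every component carries exactly one label and a cross-referenced GDPR status before week 3 starts.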
Weeks 3-4: compliance with Article 50 (the urgency of August 2, 2026)
On each chatbot: add a persistent AI disclosure, visible from the first message.
On each published AI marketing content: prepare a documented human editorial process activating the Article 50(4) exception, or failing that, explicit labeling “AI-generated content”.
On commercial deepfakes: mandatory visible labeling without exception.
Any personal data processed falls under CNIL oversight, with the GDPR cross-referenced with Article 50 (source: leto.legal).
One recent supplier-side technical option is analyzed in our dossier on the local OpenAI model compatible with the GDPR.
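For the chatbot disclosure, a minimal backend-side sketch may help. The wording and function names are illustrative; Article 50 mandates the disclosure itself, not this exact text or mechanism:

```python
# Sketch of an Article 50 disclosure wrapper for a chatbot backend.
# DISCLOSURE wording is an example, not regulatory-mandated text.
DISCLOSURE = "You are chatting with an AI assistant."

def with_disclosure(reply: str, is_first_message: bool) -> str:
    """Prepend the AI disclosure to the first reply of a conversation."""
    if is_first_message:
        return f"{DISCLOSURE}\n\n{reply}"
    return reply

print(with_disclosure("Hello! How can I help?", is_first_message=True))
```

Whatever the implementation, the disclosure must be visible to the end user, not buried in terms of service, and should persist in the conversation UI rather than appear only once in a dismissible banner.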
Weeks 5-6: preparing for watermarking by December 2, 2026
Map the flows of AI-generated content in production: images, videos, audio, automated text.
Discuss with suppliers (OpenAI, Anthropic, Google, Mistral) the status of watermarking and provenance metadata.
Test the integration of C2PA tags or invisible watermarks in the editorial pipeline before the final version of the Code of Practice expected in June 2026.
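A production pipeline would embed a C2PA manifest via a C2PA SDK; as a simplified stand-in for testing the flow, the sketch below records the same minimal machine-readable facts (generator, model, timestamp, asset hash) as a JSON manifest. Field names are assumptions for illustration, not the C2PA schema:

```python
# Simplified provenance-manifest sketch: a stand-in for a real C2PA
# manifest, recording minimal machine-readable facts about an AI asset.
import hashlib
import json
from datetime import datetime, timezone

def provenance_manifest(asset_bytes: bytes, generator: str, model: str) -> dict:
    """Build a minimal machine-readable provenance record for one asset."""
    return {
        "claim": "ai_generated",
        "generator": generator,
        "model": model,
        "created": datetime.now(timezone.utc).isoformat(),
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }

manifest = provenance_manifest(b"fake image bytes", "acme-pipeline", "gpt-4")
print(json.dumps(manifest, indent=2))
```

Hashing the asset lets you detect whether the published file still matches the manifest; a real C2PA implementation additionally cryptographically signs the manifest and embeds it in the file itself.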
Weeks 7-8: high-risk, cross-referenced GDPR audit, and designation of responsible person
On Annex III components, start technical documentation: risk management, data quality, human control, logging.
Cross-reference with GDPR Article 22 (automated decisions) and DPIA if sensitive data.
Appoint an AI Act responsible person internally or externally (often an expanded DPO role).
Identify the contact authority according to the purpose: CNIL for biometrics and HR, DGCCRF as the single contact point (Article 70.2), ARCOM for media content, ACPR if credit scoring, HAS/ANSM if medical device.
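The logging item above can be started immediately with a minimal decision-trail sketch: record each automated decision plus any human review, so the trail already exists when documentation begins. Field names are illustrative assumptions, not fields prescribed by the AI Act:

```python
# Minimal decision-trail sketch for the "logging" and "human control"
# items of the weeks 7-8 audit. Field names are illustrative.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_act_audit")

def log_decision(system: str, input_id: str, score: float,
                 decision: str, human_reviewer: "str | None") -> dict:
    """Append one automated-decision record to the audit trail."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "input_id": input_id,
        "score": score,
        "decision": decision,
        "human_reviewer": human_reviewer,  # None means fully automated
    }
    log.info(json.dumps(record))
    return record

rec = log_decision("cv-prescreening", "cand-0042", 0.31, "rejected", "hr-lead")
```

A trail like this also feeds the GDPR Article 22 analysis: records where `human_reviewer` is None flag the fully automated decisions that need the closest legal scrutiny.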
The most costly mistake in May 2026 is waiting until December 2, 2027, to address compliance, while August 2, 2026, and December 2, 2026, are the real next red lines for AI transparency and watermarking.
Conclusion: AI Act compliance is not postponed, it is prepared
The Digital Omnibus of May 7, 2026, offers 16 months of runway on high-risk but changes nothing about the three red lines structuring SME AI compliance in 2026.
The real next dates remain August 2, 2026 for transparency, December 2, 2026 for watermarking and CSAM ban, with ongoing vigilance on Article 5 and GPAI already active.
The small mid-cap opens a strategic arbitration window for agencies growing between 250 and 750 employees.
The agreement remains legally provisional as of May 12, 2026, making it crucial to monitor publication in the OJEU in the coming weeks.
For more on concrete obligations by company size, read our dossier on AI Act compliance for French companies.
FAQ: 10 questions about the Digital Omnibus and the AI Act
Is the May 7, 2026 agreement already law?
No. It is a provisional political agreement that must be formally adopted by the Council and Parliament and published in the OJEU before August 2, 2026; otherwise, the original timeline applies.
Do my AI systems benefit from the delay to December 2027?
Only if they fall under autonomous Annex III (recruitment, biometrics, credit, education, justice, critical infrastructures, border control, democratic processes); Annex I moves to August 2, 2028.
What remains applicable on August 2, 2026, despite the agreement?
Article 50 transparency (chatbot, deepfake, AI content) remains applicable on this date, GPAI Articles 53 to 55 have been active since August 2, 2025, and Article 5 bans since February 2, 2025.
When must my chatbot display “you are talking to an AI”?
By August 2, 2026, at the latest, with a maximum fine of €7.5M or 1% of global turnover for transparency failures.
From what date should I mark AI-generated content I publish?
Article 50(2) watermarking applies on August 2, 2026, for new systems and on December 2, 2026, for generative AI systems already on the market (transitional period shortened by Parliament to 3 months).
Is my 80-employee company an SME or small mid-cap?
With 80 employees and turnover under €50M, you are an SME in the European sense, with simplified documentation, free sandbox, and proportionate sanction cap.
Which French authorities can sanction me?
The DGCCRF is the single contact point (Article 70.2), CNIL handles biometrics and HR, ARCOM handles media content, ACPR handles financial scoring, HAS and ANSM handle medical devices.
What should I do in the next 8 weeks?
Weeks 1-2 inventory and classification, weeks 3-4 Article 50 compliance, weeks 5-6 watermarking, weeks 7-8 high-risk documentation and designation of an AI Act responsible person (schedule: mid-May to mid-July 2026).
Is AI literacy in Article 4 still mandatory?
Yes, Article 4 requires training for employees using AI tools in a professional setting, and the May 7 compromise maintains the obligation despite trilogue debates.
What happens if the OJEU does not publish before August 2, 2026?
The original AI Act timeline automatically applies, with Annex III obligations entering into force on that date, a calendar risk under-covered in the French press.