On May 4, 2026, Caisse des Dépôts signed a framework agreement with Mistral AI to equip up to 100,000 public agents with generative artificial intelligence.
The Mistral Caisse des Dépôts deal is valued at up to 140 million euros excluding taxes over four years and covers 19 entities within the group.
Beneath the sovereign veneer, it’s a precise industrial signal: Mistral becomes the preferred supplier for a major public client, with Sopra Steria and Computacenter handling the integration.
It’s crucial to assess what’s at stake beyond the numbers: Mistral Medium 3.5’s ability to compete with Claude Opus 4.5 and GPT-5.5, the real cost for an SME looking to replicate the pattern, and the hidden American dependency in the sovereign cloud.
In short
- Read the CDC deal beyond the storytelling: single-award framework agreement, max 4 years, 140 million euros cap, 19 subsidiaries, 40,000 licenses at launch, 100,000 target users.
- Evaluate Mistral Medium 3.5 on the right benchmark: 128B dense, 256k context, SWE-Bench Verified 77.6%, two points below Claude Sonnet 4.6, half the price per million tokens.
- Understand the Le Chat tiers: Pro (14.99 €/month), Team (24.99 €/seat/month), Enterprise with three modes (Cloud, dedicated VPC, Self-Hosted).
- Connect internal tools via MCP: Le Chat Work offers 20+ native connectors and allows custom MCP servers since April 2026.
- Align GDPR with Mistral DPA: EU hosting, Module 4 SCC, SecNumCloud on the CDC side, opt-out training from Le Chat Pro.
- Budget the real TCO for 25 seats: license 375 to 625 € per month, year 1 total three to four times this amount once SSO, DPA, training, and shadow AI governance are factored in.
The Caisse des Dépôts × Mistral deal, beyond sovereign storytelling
Caisse des Dépôts didn’t sign a communication partnership.
It awarded a public contract through a tender, with a pooled purchasing consortium among 19 group subsidiaries.
The framework agreement, signed on May 4, 2026, runs for two years, renewable up to a maximum of four years.
The cumulative cap reaches 140 million euros excluding taxes, according to the Sopra Steria statement on February 26, 2026.
Scope and timeline: 19 entities, 140 million euros excluding taxes, 40,000 then 100,000 seats
The scope covers two lots.
Lot 1 covers generative AI solutions: large language models (LLM, models trained on massive text data), conversational agents, business assistants, fine-tuning.
Lot 2 focuses on GPU capabilities (Graphics Processing Unit, graphics processors for running AI models) and the associated SecNumCloud infrastructure.
The deployment starts with 40,000 licenses and aims for 100,000 users over the contract duration.
The 19 involved entities include Bpifrance, La Banque Postale, La Poste, CNP Assurances, Docaposte, Geopost, Icade, and SFIL.
The useful analogy: a pooled railway tender, firm cap at 140 million euros, single supplier.
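The cap, the seat target, and the four-year horizon allow a quick sanity check. A minimal sketch, assuming (our simplification, not a contract term) that the full cap were spread evenly across every target seat over the maximum term:

```python
# Back-of-envelope check on the 140 M€ cap. Assumption (ours, not the
# contract's): the cap is spread evenly across all 100,000 target seats
# over the maximum four-year term. The cap is a ceiling, not a price.
CAP_EUR = 140_000_000
TARGET_SEATS = 100_000
MAX_MONTHS = 4 * 12

ceiling = CAP_EUR / TARGET_SEATS / MAX_MONTHS
print(f"{ceiling:.2f} €/seat/month")  # 29.17 €/seat/month
```

Read as an upper bound, this lands just above the Le Chat Team list price of 24.99 € per seat, which gives an idea of how tight the envelope is once GPU capacity and integration are included.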
Sopra Steria and Computacenter, exclusive integrators
The agreement is single-award.
The duo Sopra Steria + Computacenter handles integration for the 19 entities.
This is a point that French coverage has barely commented on: the integrator concentration remains an industrial vulnerability.
Single-award doesn’t mean a de facto monopoly.
It means that for four years, two IT service companies hold the value chain between Mistral and public clients.
If one falters, the entire market wobbles.
Sopra Steria manages consulting and fine-tuning.
Computacenter provides the SecNumCloud infrastructure (ANSSI qualification, the French cybersecurity agency).
The three needs identified by CDC: generalist AI assistant for employees, information system design and development, AI studio for specific use cases.
Mistral Medium 3.5: what the model concretely brings
Mistral released Mistral Medium 3.5 on April 29, 2026, five days before the CDC announcement.
The timing is no coincidence: this model now powers Le Chat and Vibe, and it’s the one behind the 100,000 public seats.
128B dense, 256k context, reasoning_effort on demand
Three technical points.
One, 128 billion parameters in a dense architecture, all activated for every token (unlike a Mixture-of-Experts, which activates only a subset of experts per token).
Two, a 256,000-token context window, enough to swallow an SME's entire codebase and its documentation in a single prompt.
Three, a reasoning_effort parameter configurable per request, modulating between quick response and long multi-step reasoning.
The model is multimodal (text plus image input), published under a Modified MIT license.
Self-hosting requires 4 H100 GPUs in FP8 quantization, about 70 GB of VRAM.
API pricing: $1.50 per million input tokens, $7.50 per million output tokens.
Product side: Le Chat Pro at 14.99 € per month, Le Chat Team at 24.99 € per seat, with Work mode and Vibe CLI access included.
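At the listed API prices, the cost of a single large-context request can be estimated directly. A sketch with illustrative request sizes (the token counts below are our assumptions, not figures from Mistral):

```python
# Per-request cost at the listed prices: $1.50 per million input
# tokens, $7.50 per million output tokens.
PRICE_IN, PRICE_OUT = 1.50, 7.50  # USD per million tokens

def request_cost(tokens_in: int, tokens_out: int) -> float:
    """Return the USD cost of one API call at the listed prices."""
    return tokens_in / 1e6 * PRICE_IN + tokens_out / 1e6 * PRICE_OUT

# A near-full 256k-context prompt with a 2,000-token answer:
print(round(request_cost(250_000, 2_000), 3))  # 0.39
```

Even a prompt filling most of the 256k window costs well under half a dollar at these rates, which is what makes the document-batch scenario discussed later attractive on the API side.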
SWE-Bench Verified 77.6%, to be read in context
The SWE-Bench Verified score of 77.6% is the marketing figure.
SWE-Bench Verified measures the model’s ability to resolve real bugs on public open source repositories, with human filtering.
The model is two points below Claude Sonnet 4.6 (79.6%) and several points below Claude Opus 4.7.
The test is in English and on public code: extrapolating the score to French business tasks has no statistical validity.
A benchmark is an indication of a ceiling, not a success rate in production.
The only measure that counts for the CDC deployment is the internal ticket resolution rate, and no one has published it.
On τ³-Telecom, a multi-step telecom agent benchmark, Mistral Medium 3.5 reaches 91.4%.
To be tempered: this benchmark is designed and executed by Mistral, without independent replication.

Le Chat Work, Vibe Remote Agents, and MCP as agentic foundation
The model alone doesn’t make the product.
What CDC buys is a complete agentic stack: an assistant that orchestrates internal tools, writes documents, opens tickets, and validates before sensitive actions.
Work mode and Vibe: cloud sandbox and explicit approval
Work mode is an agent mode introduced in Le Chat on April 29, 2026, in preview for Pro, Team, and Enterprise.
It transforms Le Chat into a multi-step harness: the model plans, calls tools in parallel, shows its reasoning, and requests explicit approval before each sensitive action (sending messages, modifying data, creating tickets).
Under the hood, Work mode has a bash sandbox for code execution, web search, a Canvas for editing output, Libraries (file uploads), and MCP connectors.
Vibe Remote Agents, launched the same day, are the code-side equivalent: asynchronous coding sessions running in the cloud, in an isolated sandbox, with access to GitHub, Linear, Jira, Sentry, Slack, and Teams.
The analogy: a freelance developer working at night in a cloud sandbox, opening a PR in the morning on GitHub and leaving you the review.
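The approval gate described above can be sketched as a simple policy check in front of tool execution. Everything here is illustrative: the tool names, the sensitive-action set, and the callback signature are our assumptions, not Mistral's implementation.

```python
# Illustrative approval gate: read-only tools run freely, sensitive
# actions require an explicit human decision first.
from typing import Callable

SENSITIVE = {"send_message", "modify_data", "create_ticket"}  # assumed set

def run_tool(name: str, args: dict, approve: Callable[[str, dict], bool]) -> str:
    """Execute a tool call, gating sensitive actions behind approval."""
    if name in SENSITIVE and not approve(name, args):
        return f"{name}: blocked (approval denied)"
    return f"{name}: executed"

# Simulated session: the human approves ticket creation only.
approve = lambda name, args: name == "create_ticket"
print(run_tool("web_search", {"q": "SecNumCloud"}, approve))  # web_search: executed
print(run_tool("send_message", {"to": "dg"}, approve))        # send_message: blocked (approval denied)
print(run_tool("create_ticket", {"title": "bug"}, approve))   # create_ticket: executed
```

The design choice matters more than the code: putting the gate in the harness rather than in the model means no amount of prompt manipulation can skip the human step.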
MCP, the integration brick towards in-house IS
The Model Context Protocol standardizes access to internal tools from an AI agent.
Instead of writing a one-off API integration for each CRM, ERP, or document base, you expose an MCP server that speaks a single, standard protocol.
All agents supporting MCP consume it without specific code.
Mistral has no official MCP server: the company chose a client-side approach, with Le Chat consuming external MCP servers.
The official directory includes 20+ native connectors (GitHub, Notion, Jira, Snowflake, Databricks, Stripe, etc.).
Since April 15, 2026, the Connectors API has allowed building custom MCP connectors programmatically, making business integration feasible for an SME: expose Sage, Cegid, or Odoo via a custom MCP server and connect Le Chat to it in a few days.
For the complete protocol mechanics, this complete guide to the Model Context Protocol details the architecture and implementation patterns.
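To make the custom-connector idea concrete, here is the shape of the JSON-RPC 2.0 message an MCP server returns when a client asks which tools it exposes. The `get_invoice` tool and its schema are invented examples for an Odoo-style ERP connector, not part of any real directory:

```python
# Minimal sketch of an MCP tools/list response (MCP rides on JSON-RPC 2.0).
# The tool and its schema are hypothetical.
import json

tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "get_invoice",
            "description": "Fetch an invoice from the ERP by its number",
            "inputSchema": {
                "type": "object",
                "properties": {"number": {"type": "string"}},
                "required": ["number"],
            },
        }]
    },
}

# Any MCP-capable client can discover this tool without connector-specific
# code: it reads the name and schema, then calls tools/call.
print(tools_list_response["result"]["tools"][0]["name"])  # get_invoice
```

This discoverability is the whole point: the agent learns what the connector can do from the schema, so adding a tool never requires changing the client.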
What the contract validates for Mistral, what it hides
The CDC deal shifts Mistral from a “political champion” status to a strategic operational supplier status.
The real picture is more nuanced than the official narrative.
French enterprise traction: Banque Postale, France Travail, CMA-CGM
Mistral doesn’t arrive at CDC without experience.
The company already boasts a substantial list of French enterprise clients: La Banque Postale, France Travail, CMA-CGM, ASML, Moeve.
Annual revenue is on a trajectory toward one billion euros in 2026, and the 830-million-euro round raised in March 2026 is funding a data center near Paris.
NVIDIA, SecNumCloud, and upstream dependency
The displayed sovereignty is legal and geographical: EU hosting, ANSSI SecNumCloud qualification, French jurisdiction not subject to the US CLOUD Act.
It is not technological.
Mistral trains its models on NVIDIA GPUs, which hold about 95% of the AI accelerator market.
French software sovereignty floats on an American hardware foundation.
As long as NVIDIA dominates the GPU market and the CUDA coupling remains locked, talking about a 100% sovereign stack is inaccurate.
Mistral combines its own data centers (Bruyères-le-Châtel, Sweden via partnership) and cloud capacities with American hyperscalers for redundancy.
The full story on France’s position against American giants is in this article on Mistral AI’s digital sovereignty.
Adopting Mistral Medium 3.5 in a French SME: path and TCO
The CDC deal remains replicable on a smaller scale.
A French SME with 25 to 50 people can follow a structured evaluation path without involving Sopra Steria, provided it doesn’t confuse the tiers and budgets for hidden costs.
Le Chat Pro, Team, Enterprise, and the 3 deployment modes
Four tiers to distinguish on the Le Chat side.
Free: prompts potentially used for training, to be excluded for any professional use with sensitive data.
Pro at 14.99 € per month: no-training by default, Vibe CLI included, Work mode available.
Team at 24.99 € per seat: 200 messages per user, 30 GB storage, admin controls.
Enterprise on quote: SAML SSO, audit logs, granular controls.
The Enterprise tier comes in three deployment modes.
Multi-tenant cloud (Le Chat hosted by Mistral, shared infrastructure), dedicated VPC (Virtual Private Cloud, isolated cloud environment), and Self-Hosted on-premises.
For an SME with 25 seats without heavy sector constraints, Team suffices.
For a health, finance, or defense SME, dedicated VPC becomes the minimum standard.
DPA, SCC Module 4, and shadow AI governance
On the GDPR (General Data Protection Regulation) side, Mistral offers a DPA (Data Processing Addendum, data processing contract) via La Plateforme and Le Chat Enterprise.
The legal mechanism is Module 4 of the SCC (Standard Contractual Clauses): Mistral as processor to the SME as controller, EU hosting by default, opt-out training from Le Chat Pro.
The TCO (Total Cost of Ownership) trap for 25 Le Chat Team seats: the visible license costs 375 to 625 € per month depending on the discount.
The hidden part: 5 IT man-days for SSO (Single Sign-On), 2 days to validate the DPA with the legal department, 3 to 5 days of internal training, and a system to prevent shadow AI (unauthorized parallel use of AI tools).
Real year 1 cost: multiply the license by 3 or 4.
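The ×3-4 multiplier can be reconstructed with explicit assumptions. The day rate and the monthly governance cost below are our estimates layered on the article's man-day figures, not numbers from Mistral or CDC:

```python
# Year-1 TCO sketch for 25 Le Chat Team seats at list price.
SEATS, SEAT_PRICE = 25, 24.99   # €/seat/month (list price)
DAY_RATE = 650                  # assumed blended IT/legal day rate, €
GOVERNANCE = 800                # assumed shadow-AI governance cost, €/month

license_y1 = SEATS * SEAT_PRICE * 12        # ≈ 7,497 €
setup = (5 + 2 + 4) * DAY_RATE              # SSO + DPA + training man-days
total = license_y1 + setup + GOVERNANCE * 12
print(round(total / license_y1, 1))  # 3.2
```

With these assumptions the multiplier lands at the low end of the 3-4× range; a higher day rate or heavier governance pushes it toward 4.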

Quantified comparison against Claude Opus 4.5 and GPT-5.5
The Mistral Caisse des Dépôts deployment raises a concrete question: how does Mistral Medium 3.5 hold up against its competitors, and on which axes does each one falter?
- Mistral Medium 3.5: 256k context, $1.50 / $7.50 per million tokens, SWE-Bench Verified 77.6%, open weights Modified MIT, self-host 4 GPU.
- Claude Opus 4.5: 200k context, $5 / $25 per million tokens, SWE-Bench Verified 80.9%, closed weights, API only.
- GPT-5.5: up to 1M tokens context, $5 / $30 per million tokens, Terminal-Bench 2.0 at 82.7%, closed weights, US hosting.
The right trade-off depends on the cost / accuracy ratio for your tasks.
For document batch processing, Mistral Medium 3.5 is 3.3 times cheaper per output token than Claude Opus 4.5, and 4 times cheaper than GPT-5.5.
For high-value frontier coding, Claude Opus 4.5 retains a measurable advantage.
For a French SME with sensitive data, the Mistral Medium 3.5 + EU DPA + self-host option combo remains unmatched by US competition.
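The price ratios follow directly from the listed per-token prices:

```python
# Output-token price ratios from the prices listed above.
MISTRAL_OUT = 7.50   # $/M output tokens, Mistral Medium 3.5
CLAUDE_OUT = 25.00   # $/M output tokens, Claude Opus 4.5
GPT_OUT = 30.00      # $/M output tokens, GPT-5.5

print(round(CLAUDE_OUT / MISTRAL_OUT, 2))  # 3.33
print(round(GPT_OUT / MISTRAL_OUT, 2))     # 4.0
```

Input-token ratios tell the same story ($5 vs $1.50, so 3.3× on both rivals), which is why the batch-processing verdict is insensitive to the input/output mix.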
The Mistral Caisse des Dépôts contract doesn’t make Mistral the European AI leader.
It makes it a strategic operational supplier for the French public sector, with an agentic stack aligned with market standards (MCP, Work mode, Vibe).
Sovereignty remains partial, NVIDIA dependency persists, and the integrator concentration around Sopra Steria and Computacenter deserves monitoring over the four-year contract.
For an SME wanting to replicate the pattern, the entry ticket starts at Le Chat Pro at 14.99 € per month.
To place this deal in Mistral’s roadmap, read the 22-proposal plan for European AI leadership.
FAQ: 10 questions about the Mistral CDC deployment
What’s the difference between Le Chat Pro, Team, and Enterprise?
Pro at 14.99 €/month (individual, opt-out training), Team at 24.99 €/seat (SME, admin controls), Enterprise on quote (SAML SSO, audit logs, three deployment modes).
How much does Mistral Medium 3.5 cost per million tokens?
$1.50 for input and $7.50 for output via the API.
Beyond 50 million tokens per day, self-hosting on 4 H100 GPUs becomes economically viable.
Does the CDC deployment really concern 100,000 public agents?
40,000 licenses at launch across 19 entities, targeting 100,000 users over the contract duration (up to 4 years).
Who integrates Mistral at Caisse des Dépôts?
The Sopra Steria + Computacenter consortium, single-award since February 26, 2026: consulting and fine-tuning by Sopra, SecNumCloud by Computacenter.
What exactly is SecNumCloud?
An ANSSI security qualification certifying that a cloud host meets requirements compatible with sensitive data, including protection against extra-European jurisdictions.
Is Mistral independent from the United States?
Legally yes: headquarters in Paris, EU hosting, outside the CLOUD Act.
Technologically no: training on NVIDIA GPUs, which hold 95% of the AI accelerator market.
Should you choose Mistral Medium 3.5 or Claude Opus 4.5?
Document batch processing and standard coding: Mistral is 3 to 4 times cheaper for a 3-point difference on SWE-Bench Verified.
High-value frontier coding: Claude Opus 4.5 retains a measurable advantage.
How does MCP work with Le Chat Work?
Le Chat Work consumes external MCP servers via a directory of 20+ connectors (GitHub, Notion, Jira, Snowflake, etc.).
Since April 2026, SMEs can create their own custom MCP servers (Sage, Cegid, Odoo).
Is Le Chat GDPR compliant for a French SME?
Yes from Le Chat Pro (opt-out training by default).
For sensitive data: DPA via La Plateforme or Le Chat Enterprise, Module 4 of the SCC activated.
What is the real TCO for 25 Le Chat Team seats in the first year?
Displayed license 375 to 625 € per month.
Add SSO, DPA, training, and shadow AI prevention: multiply the license by 3 to 4 for the full TCO.