On March 10, 2026, Yann LeCun announced the largest fundraise ever completed by a European startup at the seed stage: $1.03 billion for his laboratory AMI Labs, founded after his departure from Meta.
That number alone doesn’t capture what’s at stake.
LeCun, winner of the 2018 Turing Award and one of the founding fathers of modern deep learning, does not believe that large language models (LLMs) will lead to artificial general intelligence.
His thesis, built around world models, represents a deep architectural break, and this billion signals that some of the most sophisticated investors on the planet share that diagnosis.
Key takeaways:
- AMI Labs raised $1.03B at a $3.5B valuation: an absolute record for a European seed round, with zero products and zero revenue
- World models learn the physical laws of the world from video, while LLMs predict text tokens: two structurally different paradigms
- JEPA, AMI Labs’ core architecture, already allows robots to generalize to objects they’ve never seen, without reprogramming
- For French tech decision-makers, LLMs remain relevant in the short term: the first world model products are years away, not months
- AMI Labs reflects a French AI sovereignty strategy, with its headquarters in Paris, Macron’s backing, and investment from major French industrial groups
The numbers behind a historic raise
$1.03B: a European record
The AMI Labs fundraise surpasses anything Europe has ever seen: $1.03 billion at seed stage, at a pre-money valuation of $3.5 billion.
For context, Mistral AI’s record raise in June 2023 totaled €113 million: AMI Labs raised nearly 9 times more, with zero commercial product and zero revenue, and roughly a dozen employees at the time of the announcement.
Strategic investors, not just financial ones
The round was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Jeff Bezos Expeditions.
The participant list speaks to the strategic weight of the bet: Nvidia, Temasek, Toyota Ventures, Samsung, alongside Xavier Niel, Eric Schmidt (former Google CEO), Tim Berners-Lee, and French industrial groups Dassault and Mulliez.
The geographic split is balanced: one third from North America, one third from Europe, one third from Asia.
A team from FAIR and Nabla
Yann LeCun serves as Executive Chairman, while retaining his position as a professor at New York University.
The CEO is Alexandre LeBrun: founder of VirtuOz (acquired by Nuance, now part of Microsoft), Wit.ai (acquired by Meta), and co-founder of Nabla, which today serves more than 85,000 physicians across 150 healthcare organizations.
The founding team includes Michael Rabbat (VP World Models, former director of FAIR Montréal), Saining Xie (CSO, co-creator of the DiT diffusion transformers powering OpenAI’s Sora), Pascale Fung (Chief Research Officer), and Laurent Solly (COO, former VP Europe at Meta).
“AMI Labs is a very ambitious project, because it starts from fundamental research.”
“This is not a typical AI startup that can ship a product in three months, generate revenue in six, and hit $10M ARR in twelve.”
Alexandre LeBrun, CEO of AMI Labs
World models vs LLMs: the architectural break
The limits of LLMs
A large language model learns by predicting the next token in a sequence of text: it sees sentences, not the physical world.
This approach yields remarkable systems for writing and code, but it carries structural flaws: LLMs hallucinate with confidence, struggle with genuinely novel situations, and fail at tasks requiring an intuitive grasp of physics.
Recent Anthropic research on AI survival instincts and the limits of alignment illustrates this precisely: these models develop unanticipated behaviors because they optimize textual patterns, not a causal representation of the world.
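The next-token objective described above can be sketched with a toy bigram counter. This is purely illustrative (real LLMs are neural networks with billions of parameters), but the principle is the same: the model learns which token tends to follow which, with no grounding in what the tokens refer to.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    """Count next-token frequencies: pure co-occurrence statistics,
    with no representation of what the tokens actually mean."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token: str) -> str:
    """Return the most frequent successor of `token` in the corpus."""
    return counts[token].most_common(1)[0][0]

model = train_bigram(
    "water boils at 100 degrees . water flows downward . water boils fast"
)
print(predict_next(model, "water"))  # -> "boils" (seen twice, vs "flows" once)
```

The model “knows” that “boils” follows “water” only because it counted that pair more often: correlation without causation, which is exactly the failure mode LeCun criticizes.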
JEPA explained simply
The architecture at the heart of AMI Labs is called JEPA (Joint Embedding Predictive Architecture), introduced by LeCun in his 2022 foundational paper: A Path Towards Autonomous Machine Intelligence.
Rather than predicting pixels or tokens, JEPA learns to predict abstract representations of the future state of an environment.
Picture a child watching a ball roll across a table: they don’t memorize every pixel, they build a mental model of the trajectory, the likely bounce, the effect of gravity.
JEPA does the equivalent in latent space: the architecture learns the rules of the world, not its surface details.
Meta has published concrete versions: I-JEPA (images), V-JEPA 2 (trained on more than one million hours of internet video, already integrated into physical robots for zero-shot generalization), and VL-JEPA (vision and language in a shared space).
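The JEPA objective can be contrasted with pixel prediction in a few lines. Everything below is a stand-in (a fixed projection instead of a learned encoder, a hand-written “gravity” step instead of a trained predictor); the point is only to show where the loss lives: in latent space, never on raw frames.

```python
import numpy as np

def encoder(frame: np.ndarray) -> np.ndarray:
    """Stand-in encoder: projects a 3-dim 'frame' to a 2-dim latent.
    (In JEPA this is learned; a fixed projection suffices to show
    where the loss is computed.)"""
    W = np.array([[1.0, 0.0,  0.5],
                  [0.0, 1.0, -0.5]])
    return W @ frame

def predictor(z: np.ndarray) -> np.ndarray:
    """Toy latent dynamics: an assumed 'gravity' step of -1 per frame."""
    return z + np.array([0.0, -1.0])

# Two consecutive "frames" of a falling ball: (x, y, vertical velocity)
frame_t  = np.array([0.0, 10.0, -1.0])
frame_t1 = np.array([0.0,  9.0, -1.0])

z_t, z_t1 = encoder(frame_t), encoder(frame_t1)

# JEPA-style objective: distance between predicted and actual *latents*,
# not between raw frames or pixels.
loss = float(np.sum((predictor(z_t) - z_t1) ** 2))
print(loss)  # 0.0: the toy dynamics match the observed latent transition
```

Because the error is measured on abstract representations, the model is free to ignore surface details (texture, lighting) and keep only what governs the dynamics, which is the child-watching-a-ball intuition above.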

LLM vs world model: a comparison
| Criterion | LLM (GPT, Claude, Mistral…) | World model (JEPA) |
|---|---|---|
| Training data | Text, tokens | Video, sensory data |
| Learning objective | Predict the next token | Predict the future state of the world |
| Strengths | Language, code, summarization | Physics, planning, robotics |
| Limitations | Hallucinations, intuitive physics | Experimental, no commercial product |
| Maturity in 2026 | Large-scale production | Fundamental research |
Why LeCun says LLMs aren’t enough
The driving analogy
LeCun uses an analogy that has become well-known: learning to drive by reading books.
You can read every driving manual in the world and learn to describe traffic rules, how an engine works, and how to recover from a skid. Yet no amount of text can replace the hours of embodied practice that build a driver’s real reflexes.
“We don’t have autonomous cars that can learn to drive in 20 hours of practice, the way a 17-year-old can.”
Yann LeCun
An 18-month-old intuitively understands gravity: they know that a dropped object falls, that water flows downward, that towers of blocks topple over.
That understanding doesn’t come from reading: it comes from sensory and motor experience accumulated since birth.
From words to the physics of the world
When an LLM “knows” that water boils at 100°C, it has learned a statistical correlation between tokens, without ever observing a pot of boiling water or witnessing the liquid-to-gas transition.
This distinction between knowing how to talk about the world and understanding how the world works is the core of LeCun’s argument.
A world model trained on video directly observes cause-and-effect relationships: objects falling, materials deforming, fluids flowing.
What LLMs do very well
LeCun doesn’t claim LLMs are going away: they excel at language tasks, code, document summarization, and abstract reasoning.
His thesis is more nuanced: LLMs have hit a structural ceiling for applications that require an embodied understanding of the world.
That ceiling won’t be lifted by adding more parameters or more text data: it calls for an architectural break.
AMI Labs: concrete applications
Industrial robotics
Today, reprogramming an industrial robot for a new box format requires hours of line downtime and specialist intervention.
A robot equipped with a world model could mentally simulate different gripping strategies, evaluate which is physically viable, and execute it without reprogramming.
V-JEPA 2 demonstrations on physical robots already show this capability: objects never seen during training handled successfully in entirely new configurations.
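The simulate-before-acting loop described above amounts to model-predictive selection. The sketch below is hypothetical (the `world_model` stand-in simply adds the action to the state; a real model such as V-JEPA 2 is learned from video), but it shows the shape of the idea: imagine each candidate action’s outcome, then pick the best.

```python
import numpy as np

def world_model(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Stand-in world model: predicts the next state after an action.
    (A real model would be learned from sensory data, not hand-coded.)"""
    return state + action

def cost(state: np.ndarray, goal: np.ndarray) -> float:
    """Squared distance between a simulated outcome and the goal."""
    return float(np.sum((state - goal) ** 2))

def plan(state, goal, candidate_actions):
    """Imagine each candidate action's outcome with the world model,
    then pick the one whose simulated result is closest to the goal."""
    return min(candidate_actions,
               key=lambda a: cost(world_model(state, a), goal))

gripper = np.array([0.0, 0.0])
box     = np.array([1.0, 2.0])
candidates = [np.array([1.0, 0.0]),
              np.array([0.0, 2.0]),
              np.array([1.0, 2.0])]

best = plan(gripper, box, candidates)
print(best)  # [1. 2.]: the action whose simulated outcome reaches the box
```

No strategy is hard-coded for this particular box: swap in a new goal state and the same loop finds a new action, which is the “no reprogramming” property the paragraph describes.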
Healthcare: the Nabla case
Nabla was announced as AMI Labs’ first strategic partner in December 2025, with early access to world model technologies currently in development.
The goal: moving from clinical documentation assistance (the LLM transcribes and summarizes) to clinical decision support (the world model simulates how a treatment propagates through the body).
Healthcare is the domain where LLM hallucinations carry the heaviest consequences, and where the causal reliability of a world model would have the highest value.
Autonomous vehicles
Self-driving cars need to understand rare but critical scenarios: unpredictable pedestrian behavior, extreme weather, emergency vehicles appearing from a blind intersection.
A world model can generate simulations of these edge cases, enabling systems to be tested before the situation ever occurs in the real world: Waymo is already developing its own approach based on DeepMind’s Genie 3.
A world model doesn’t drive the car: it lets the car imagine the consequences of a decision before acting on it, where a pure LLM can only describe what it would do.
The competitive landscape for world models in 2026
World Labs and Marble
World Labs, founded by Fei-Fei Li (former director of Stanford’s Human-Centered AI Institute), raised $1 billion in February 2026, on par with AMI Labs.
Its flagship product, Marble, generates coherent and persistent 3D environments from text, images, or video, with applications in content creation and simulation.
World Labs has a tactical edge over AMI Labs: a commercial product already available, where AMI Labs is still in pure research mode.
Google DeepMind and Genie 3
Google DeepMind has published Genie 3, a general world model capable of generating photorealistic and interactive 3D environments from text descriptions.
Google’s entry confirms what AMI Labs’ capital had already signaled: world models have become a strategic priority for the entire industry.
The hybrid approach: LLMs and world models
Most serious researchers don’t think in terms of replacement, but integration.
A hybrid system would use the LLM for language understanding and abstract reasoning, while the world model handles physical planning and consequence simulation.
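Such a hybrid pipeline would route language through the LLM layer and physics through the world-model layer. The sketch below is entirely hypothetical (keyword matching stands in for the LLM, a lookup table for the world model); it only illustrates the division of labor.

```python
def llm_parse(instruction: str) -> dict:
    """Stand-in for the LLM layer: turns natural language into a
    symbolic goal. (A real system would call an actual model.)"""
    return {"task": "stack"} if "stack" in instruction else {"task": "unknown"}

def world_model_check(goal: dict) -> str:
    """Stand-in for the world-model layer: 'simulates' whether the
    goal is physically feasible before committing to action."""
    feasible = {"stack": True}
    return "execute" if feasible.get(goal["task"], False) else "clarify"

# LLM understands the words; the world model vets the physics.
decision = world_model_check(llm_parse("stack the red block on the blue one"))
print(decision)  # -> "execute"
```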
Alexandre LeBrun himself warned against “world model washing”: the temptation, already visible in marketing rhetoric, to rebrand standard LLM systems as world models.
The thinking around AI systems capable of autonomous learning, like Karpathy’s Autoresearch, follows the same logic: moving beyond the token paradigm toward more robust architectures.

What this means for French tech decision-makers
LLMs remain relevant in the short term
For a French company using ChatGPT, Claude, or Mistral in its workflows today, nothing changes in the near term.
AMI Labs’ world models are in fundamental research mode: the CEO himself has stated that the first commercial products are years away, not months.
Watch robotics, industry, and healthcare
If your business touches logistics, manufacturing, healthcare, or transportation, AMI Labs warrants active monitoring starting now.
These are precisely the sectors where world models will have the most direct applications over the medium term.
The next 18 months will be telling: AMI Labs has announced its focus on publishing research and open-source code, which means the first concrete signals on the validity of the approach will be public.
AMI Labs as a French sovereign AI unicorn
After Mistral AI, AMI Labs represents a second strong signal of France’s ability to attract and host world-class AI research.
The explicit backing of Emmanuel Macron, investment from Bpifrance and major French industrial groups (Dassault, Mulliez) sketch a clear ambition: building a sovereign third path between American and Chinese giants.
“Yann LeCun is opening a new chapter in artificial intelligence.”
“He has achieved one of the largest fundraises ever, €1 billion with AMI Labs, to transform AI.”
“This is the France of researchers, builders, and bold innovators.”
Emmanuel Macron, March 2026
To understand how France is building this strategic independence in AI, our analysis of Mistral AI’s digital sovereignty strategy and its rivalry with American giants provides the essential context.
AMI Labs: fundamental research as a strategic bet
AMI Labs doesn’t claim to have solved artificial general intelligence: the company starts from zero on the commercial side, with no product, no revenue, and a timeline measured in years.
But $1.03 billion raised from investors who understand the technology deeply sends a clear signal: the LLM paradigm is not the final destination.
World models represent the missing piece for AI systems capable of understanding, planning, and acting in the physical world, where LLMs can describe but not genuinely comprehend.
The risk is real: world models remain experimental, fundamental research takes time, and no commercial product is on the immediate horizon.
For professionals following AI, the agenda is clear: master LLMs today, watch world models tomorrow, and anticipate the hybrid architectures that will combine both paradigms.
To stay on top of world model developments and French AI research, subscribe to our newsletter and receive our analyses every week.
FAQ: AMI Labs and world models
What is AMI Labs?
AMI Labs (Advanced Machine Intelligence Labs) is a research laboratory founded by Yann LeCun after leaving Meta at the end of 2025, headquartered in Paris, dedicated to developing world models as an architectural alternative to LLMs.
AMI Labs fundraise: amount and valuation
AMI Labs raised $1.03 billion in March 2026 at a pre-money valuation of $3.5 billion, setting the absolute record for a European seed round and far surpassing Mistral AI (€113M in 2023).
AMI Labs founders
Yann LeCun (Executive Chairman, 2018 Turing Award) and Alexandre LeBrun (CEO, co-founder of Nabla) lead the company, alongside Michael Rabbat (VP World Models), Saining Xie (CSO), Pascale Fung (Chief Research Officer), and Laurent Solly (COO).
What is a world model?
A world model is an AI system trained on sensory data such as video (not text) to understand how physical environments evolve, build an internal representation of the world, and simulate the consequences of actions before executing them.
Core differences between LLMs and world models
An LLM learns statistical correlations between text tokens: it can describe physics without understanding it, whereas a world model learns the causal rules of the physical world from visual and sensory data, achieving a level of generalization that LLMs cannot reach for embodied tasks.
AMI Labs’ JEPA architecture
JEPA (Joint Embedding Predictive Architecture) is AMI Labs’ core architecture, developed by LeCun since 2022: rather than predicting pixels or tokens, it predicts abstract representations of the future state of an environment to learn the deep structure of the world.
The future of LLMs alongside world models
LLMs are not going away: they remain effective for language, code, and abstract reasoning, and most experts expect hybrid systems combining LLMs and world models to emerge rather than a full replacement.
Timeline for AMI Labs’ first products
The first year is dedicated to fundamental research and hiring: discussions with corporate partners are expected within 6 to 12 months of the company’s founding, but the first commercial products are anticipated on a horizon of several years.
AMI Labs’ main competitors in world models
Direct competitors include World Labs (founded by Fei-Fei Li, Marble product already available, $1B raise in February 2026), Google DeepMind with Genie 3, and Nvidia with Cosmos, a world model platform for physical AI downloaded more than 2 million times by early 2026.
AMI Labs and French sovereignty
AMI Labs, headquartered in Paris with a French CEO, is backed by Bpifrance, the Dassault and Mulliez groups, and Xavier Niel, and has the public support of Emmanuel Macron, who sees it as a pillar of France’s digital sovereignty strategy.