
Anthropic rents Musk’s data center for Grok: what the SpaceX deal means for Claude users

Artificial Intelligence
Nicolas
12 min read

On May 6, 2026, at 12:16 UTC, Anthropic released a statement resolving a crisis that had been simmering for two months: the Anthropic SpaceX deal gives Claude direct access to Colossus 1, the data center Musk built in Memphis for Grok.

The agreement covers over 300 MW of capacity and more than 220,000 Nvidia GPUs, with a service launch announced within thirty days.

The tabloid narrative “Musk rents his servers to his rival” misses the real point.

The focus is industrial: computing capacity has become the real bottleneck for cutting-edge AI, and the first tangible consequence directly impacts the Claude Code quota for every developer using the tool daily.

Here’s what really happened, what changes today for Claude users, and what this deal signals for the AI compute market in the long term.

In brief

  • Check your new Claude Code limits: 5-hour sessions doubled for Pro, Max, Team, and Enterprise seat-based plans, end of throttling from 5 to 11 AM PT.
  • Restart your CI/CD stuck on Tier 1: the ITPM multiplier ×15 on Opus 4.7 boosts a pipeline from 10-15 PR/h to 150-225 PR/h.
  • Reframe your API budget: Opus 4.7 prices ($5 input, $25 output per million tokens) remain unchanged, only capacity opens up.
  • Anticipate geographic dependency: 100% of Anthropic’s compute stack is now in the US, making Memphis a single point of failure for European workloads.
  • Note the strategic signal: the mention of “multiple gigawatts of orbital AI compute capacity” in the official statement is not just rhetoric; it’s an admission of the limits reached by terrestrial grids.

What happened on May 6, 2026

The sequence took less than 24 hours.

At 12:16 UTC, Anthropic published “Higher limits via SpaceX” on its official blog, announcing a compute partnership with SpaceX, whose subsidiary xAI operates Colossus 1.

The mirrored xAI statement followed, repeating the same figures: over 300 MW, more than 220,000 Nvidia GPUs distributed among H100, H200, and the new GB200 generation.

Anthropic specified operational service within thirty days, with an immediate effect on usage limits for both the Claude product and API.

On May 7, Elon Musk posted on X the sentence that concluded the sequence and reshaped the SpaceX landscape:

xAI will be dissolved as a separate company, so it will just be SpaceXAI, the AI products from SpaceX.

The tweet confirmed what analysts had anticipated since the Colossus 2 announcement in January 2026: xAI ceases to exist as an autonomous entity and becomes a product line of SpaceX, integrated into the financial galaxy preparing its IPO in June.

This sequence marks three simultaneous shifts: Anthropic gains compute, SpaceX becomes a neocloud operator, and xAI disappears from the organizational chart.

None of these three moves would have been readable in isolation; their concurrence is what gives the deal its industrial signal value.

Colossus 1: the data center built in 122 days for Grok

To understand why Anthropic signed on Memphis and not elsewhere, we need to look at what Musk actually built there.

Supermicro architecture and liquid cooling

Colossus 1 is a Supermicro HGX cluster cooled entirely by direct-to-chip liquid cooling, spread across 1,500 racks in a converted former Electrolux factory.

The site was built in 122 days between July and November 2024, an industrial record driven by Musk’s competitive pressure against OpenAI and Google.

The achieved density makes Colossus 1 one of the three densest clusters in the world, with an announced PUE below 1.15.

35 gas turbines and Tesla Megapacks

The power supply relies on 35 gas turbines delivering 420 MW at peak, complemented by a farm of Tesla Megapacks to smooth out load peaks.

This autonomy was designed to bypass the local TVA grid, whose available capacity Memphis Light & Gas engineers described as “marginal” in the trade press.

Memphis met three criteria: available industrial land, acceptable latency to tier-1 DC hubs, and Tennessee’s favorable tax treatment of compute.

Why Memphis beats the West Coast

The West Coast concentrates talent and capital, but its electrical grids are running at roughly 95% of capacity in the Bay Area and the counties around Seattle.

In 2024, Memphis offered a rare window: a 2.5 million square foot site, direct gas connection, and accelerated permitting.

This is exactly the type of site Anthropic could not have commissioned on its own without committing to 24 months of dedicated construction.


Why Anthropic signed now

The timing of the deal is no accident; it addresses an officially recognized operational crisis.

The March-April 2026 quota crisis

Between March and April 2026, Anthropic publicly admitted that 7% of Pro users experienced peak-hour throttling on Claude Code between 5 and 11 AM Pacific Time.

The Register, then Axios, relayed complaints from power users who saw their agent sessions cut off before the current job was completed.

Anthropic issued an internal mea culpa on March 31, 2026 and then adjusted its weekly quotas in early April.

This correction was not enough: the growth of agentic sessions, combined with the launch of Opus 4.7 at the end of April, put the system under pressure again.

The multi-supplier strategy gets a new piece

The SpaceX deal completes an already large compute puzzle.

Anthropic has successively signed with AWS Trainium (main training), Google TPU v5p (specific workloads), Broadcom (custom silicon by 2027), and Fluidstack (European backup capacity).

SpaceX becomes the fifth major compute supplier, with a particularity: it’s the only one providing immediate high-end Nvidia compute, whereas other deals involved alternative silicon or long-term horizons.

For teams looking to manage their API usage, the deal’s timing changes the levers available to keep the API bill under control: the freed-up capacity shifts the point at which they need to start trading off Sonnet against Opus.
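That Sonnet-versus-Opus trade-off can be sketched as a simple routing rule. This is a hypothetical illustration, not an Anthropic feature: the complexity score, threshold, and model labels below are assumptions.

```python
# Hypothetical routing rule for the Sonnet/Opus trade-off: reserve Opus for
# the hardest tasks and send everything else to the cheaper Sonnet tier.
# The complexity score and threshold are assumptions for illustration.

def pick_model(complexity: float, threshold: float = 0.7) -> str:
    """Return a model label for a task scored 0.0 (trivial) to 1.0 (hard)."""
    return "claude-opus" if complexity >= threshold else "claude-sonnet"

print(pick_model(0.9))  # claude-opus: long multi-file refactor, worth the premium
print(pick_model(0.3))  # claude-sonnet: routine lint fix, cheaper tier suffices
```

The useful property of such a rule is that the threshold becomes the single knob to turn when capacity or budget conditions change, as they did on May 6.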

Why Musk is dissolving xAI

The move on SpaceX’s side seems abrupt, but it responds to strict industrial rationality.

Colossus was running at 11% utilization

According to a SemiAnalysis note published on May 4 and picked up in the specialized press, xAI was operating at about 11% of its cumulative fleet of 550,000 GPUs distributed between Colossus 1 (220,000 GPUs) and Colossus 2 (330,000 GPUs started in January 2026).

By comparison, the same report estimates the average utilization rate of Meta and Google’s fleets on their own internal workloads at 43-46%.

xAI’s compute footprint had become structurally too large for its own product roadmap.

An airline flying at 11% occupancy eventually sells its empty seats to a competitor: that is exactly what SpaceX just did with Anthropic.

SpaceX IPO scheduled for June 2026

SpaceX is preparing its IPO for June 2026, targeting a valuation of around $400 billion.

A recurring compute revenue stream valued on a take-or-pay basis makes the multiple more defensible than a consumer AI activity still seeking product-market fit.

SemiAnalysis estimates this take-or-pay contract at about $5 billion per year, a figure Anthropic has not confirmed and that should be read as an analyst estimate, not official data.

SpaceXAI in neocloud logic

The shift from xAI to SpaceXAI reflects an industrial choice: stop chasing OpenAI on the generalist model and become a capacity operator.

The neocloud category includes players like CoreWeave, Lambda Labs, and Crusoe: operators renting high-end Nvidia GPUs to AI labs on long-term contracts.

SpaceX joins this category with a rare advantage: a site already built, already powered, already underutilized.

What changes for Claude users starting May 6

The deal was designed to produce a visible product-side effect in under a month, and that effect is what most working developers will notice first.

Claude Code quotas: 5-hour sessions doubled

Anthropic doubles the Claude Code rate limits on the 5-hour rolling window for Pro, Max, Team, and Enterprise seat-based plans.

The peak-hour throttling from 5 to 11 AM PT that affected Pro and Max also disappears: a Parisian dev launching an agent at 2 PM local time hit precisely this window, potentially losing half of their session on a poorly calibrated job.

It is like moving from an Internet plan throttled at peak hours to one that is unlimited during the effective workday.

The weekly limits remain unchanged, as do the free-tier conditions; both points were expected and are spelled out in the statement.

To measure the concrete impact: typical Claude Code agent usage goes from 40-80 prompts per 5-hour window to 80-160, enough margin to run a full day without tripping the intermediate quota.
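A back-of-envelope way to read the doubled window is to ask how many 5-hour windows a heavy agent day spills across. The window limits come from the article; the per-day prompt count below is an assumption.

```python
# Back-of-envelope check: does a heavy agent day fit in one 5-hour window?
# Window limits are the article's upper bounds; daily usage is an assumption.

OLD_WINDOW_LIMIT = 80    # prompts per 5-hour window before May 6 (upper bound cited)
NEW_WINDOW_LIMIT = 160   # doubled limit after the deal

def windows_needed(prompts_per_day: int, window_limit: int) -> int:
    """Number of 5-hour windows a day's usage spills across (ceiling division)."""
    return -(-prompts_per_day // window_limit)

daily_prompts = 140  # a heavy agent day (assumed figure)
print(windows_needed(daily_prompts, OLD_WINDOW_LIMIT))  # 2: quota hit mid-day
print(windows_needed(daily_prompts, NEW_WINDOW_LIMIT))  # 1: the full day fits
```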

Those already managing their sessions with the /ultraplan command in Claude Code gain directly exploitable leeway on their long runs.

API Opus 4.7: Tier 1 moves up a class

The second effect is less visible but more consequential for teams running Claude in production.

On Opus 4.7, Tier 1 sees its ITPM (input tokens per minute) multiplied by 15 and its OTPM (output tokens per minute) multiplied by 9.

In practical terms, a CI code review pipeline that was capped at 10-15 pull requests per hour in Tier 1 now reaches 150-225 PR per hour without changing the rate.
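The PR-per-hour figures follow directly from the ITPM ceiling once you assume an average token cost per review. The ×15 multiplier is the article’s; the baseline ITPM and tokens-per-PR values below are illustrative assumptions chosen to reproduce the cited throughput range.

```python
# Rough throughput model for a CI review pipeline under an ITPM ceiling.
# Baseline ITPM and tokens-per-PR are assumptions; the x15 multiplier is
# the article's figure.

OLD_ITPM = 20_000         # assumed Tier 1 input tokens/minute before May 6
NEW_ITPM = OLD_ITPM * 15  # after the x15 bump

TOKENS_PER_PR = 80_000    # assumed average input tokens per code review

def prs_per_hour(itpm: int, tokens_per_pr: int) -> float:
    """Max reviews per hour when input throughput is the binding constraint."""
    return itpm * 60 / tokens_per_pr

print(prs_per_hour(OLD_ITPM, TOKENS_PER_PR))  # 15.0, the old ceiling
print(prs_per_hour(NEW_ITPM, TOKENS_PER_PR))  # 225.0 after the multiplier
```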

Tier 1 was considered a friendly entry ticket but unsuited to production: after May 6, it reaches the capacity level of April 2026’s Tier 2.

The line between prototype and production moves back two steps, changing the calculus for back-end teams that had hesitated to move to Tier 2 because of its significant monthly costs.

Opus 4.7 prices remain unchanged: $5 per million tokens in input, $25 per million in output.
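Since only capacity changes and unit prices do not, budgeting stays a straight multiplication at the list prices cited above. The monthly volumes in this sketch are assumed for illustration.

```python
# Cost estimator at the unchanged Opus 4.7 list prices cited in the article
# ($5 input / $25 output per million tokens). Workload volumes are assumptions.

INPUT_PRICE = 5.0    # USD per million input tokens
OUTPUT_PRICE = 25.0  # USD per million output tokens

def monthly_cost(input_mtok: float, output_mtok: float) -> float:
    """Monthly spend in USD for volumes given in millions of tokens."""
    return input_mtok * INPUT_PRICE + output_mtok * OUTPUT_PRICE

# e.g. a review pipeline ingesting 500M input tokens and emitting 60M output
print(monthly_cost(500, 60))  # 4000.0
```

The asymmetry matters: at a 5:1 output-to-input price ratio, verbose generations dominate the bill long before input volume does.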

What doesn’t change

Free tier unchanged, batch processing still subject to the same ratios, weekly limits constant.

The deal unlocks capacity, it doesn’t lower the unit price.

For organizations looking to arbitrate between offers, the Claude vs ChatGPT pricing comparison remains the useful read to decide which model absorbs which type of workload.


What the Anthropic SpaceX deal means for the future

Three developments deserve watching over the next six months.

Geographic concentration and single point of failure

With this deal, 100% of Anthropic’s compute stack is on US soil.

Memphis adds concentration: the 550,000 GPUs distributed across Colossus 1 and 2 sit on a single site, in a single jurisdiction, on a Mississippi floodplain.

A major outage in Memphis would not just degrade Anthropic’s own services; it would take a visible share of global Claude API capacity offline.

The European sovereignty angle is barely mentioned in the press, but it will weigh heavily on DPOs and legal departments working through the new obligations of the AI Act.

Pricing and arbitrations over 6 months

The deal unlocks capacity without affecting Opus 4.7 prices, which remain at $5 input and $25 output per million tokens.

The open question: with regained capacity margin, can Anthropic lower its Opus prices in Q3 2026 to align its offer with GPT-5.5?

The market will watch the first quarterly statements with keen attention.

Orbital datacenters: signal, not roadmap

The most discreet sentence in Anthropic’s statement is also the most telling:

As part of this agreement, we have also expressed interest in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity.

The language remains exploratory, with the industrial consensus placing this type of capacity at 5 to 10 years minimum.

The signal is elsewhere: if two players as serious as Anthropic and SpaceX publicly articulate this scenario, it’s because the limit reached by terrestrial grids is now accepted as a near horizon for the next generation of clusters.

The comparison with Stargate (OpenAI, Oracle, SoftBank) aiming for 5 GW on US soil by the end of 2026 takes on new significance: if the race continues, expansion will sooner or later move beyond the atmosphere.

Conclusion

The Anthropic SpaceX deal of May 6, 2026, resolves an acute capacity crisis and launches three industrial trajectories: neocloud consolidation on SpaceX’s side, multi-supplier compute diversification on Anthropic’s side, and serious exploration of orbital compute as an extension of saturated grids.

For readers, the payoff is immediate: doubled Claude Code sessions, an API Tier 1 usable in production, and financial leeway to audit their own pipelines.

To measure what this new capacity unlocks budget-wise, the concrete API-side levers deserve a careful read before ramping Opus usage back up aggressively.

What remains to be monitored over the next six months: the stability of the new quota regime, the operational dependency on Memphis, and the potential pricing shift triggered by GPT-5.5 pressure on the Opus segment.

FAQ

How long is the Anthropic SpaceX contract?

Anthropic and xAI did not disclose an official duration in the May 6, 2026 statements.

Does the Anthropic SpaceX deal include future Colossus 3 and 4?

The published agreement covers Colossus 1 only (220,000 GPUs and 300 MW), with no mention of future extensions.

Why does Musk’s tweet mention SpaceXAI and not SpaceX AI?

SpaceXAI refers to the unified entity absorbing the former xAI brand within the parent company SpaceX, a naming consistent with Starlink and Starshield.

Does the deal change anything for Claude users in Europe?

The new quotas apply uniformly, but the physical compute remains in Memphis, raising questions about data transfers for workloads subject to European localization obligations.

Does the Claude free tier benefit from the new SpaceX compute?

No, Anthropic specifies that the free tier remains at its previous limits and that the benefit targets Pro, Max, Team, Enterprise, and API Tier 1+.

What do ITPM and OTPM mean in Anthropic’s API announcements?

ITPM means Input Tokens Per Minute (max input rate per minute), OTPM refers to Output Tokens Per Minute (symmetrical rate for generated tokens).

Can Anthropic lower its Opus 4.7 prices thanks to this deal?

The deal unlocks capacity without affecting the marginal cost of the Opus 4.7 token; a price drop would depend on GPT-5.5 and DeepSeek V4 pressure on the frontier segment.

How does the deal compare to Stargate (OpenAI, Oracle, SoftBank)?

Stargate aims for 5 gigawatts on US soil by the end of 2026 (over sixteen times Colossus 1) via a long greenfield build, whereas the SpaceX deal activates existing capacity immediately.

What should be done concretely on May 7, 2026, with these new quotas?

Check your Claude Code limits in the console, restart CI pipelines calibrated on the old ITPM limits, and audit your Opus budget before opening the tap.

Is the orbital compute mentioned by Anthropic and SpaceX credible in the short term?

The industrial consensus places operational orbit deployment at 5-10 years minimum; the mention in the statement remains exploratory and mainly reflects pressure on terrestrial grids.
