On April 8, 2026, Anthropic launched a public beta service that changes how developers and businesses deploy AI agents: Claude Managed Agents. The core idea is simple yet powerful: provide a fully managed infrastructure to run autonomous agents based on Claude, without anyone having to build or maintain the underlying technical layers. Say goodbye to homemade sandboxing, handcrafted state management, and fragile agent loops. Anthropic handles all of that, allowing you to focus on business logic.
Key takeaways:
- Zero infrastructure: Anthropic manages hosting, scaling, and monitoring, saving you months of infrastructure work.
- The service costs $0.08 per session hour on top of the standard Claude API token prices.
- Notion, Rakuten, and Sentry are already in production with concrete and measurable use cases.
- Claude Code’s autonomy has doubled in three months, from 25 to 45 minutes on the longest sessions.
- The Brain/Hands/Session model decouples reasoning from execution, allowing each component to evolve independently.
A three-component architecture that changes everything
To understand what Claude Managed Agents truly brings, you need to look under the hood. The service virtualizes three fundamental components of an AI agent:
- The session: an append-only log of all interactions, maintaining the agent’s state on long tasks, sometimes lasting several hours.
- The harness: the loop that calls Claude and routes tool calls to the relevant infrastructure. It’s the conductor.
- The sandbox: the secure execution environment for code, file editing, and interactions with external services.
This Brain/Hands/Session decoupling is the real architectural innovation. Previously, these three elements were often mixed in monolithic systems that were difficult to evolve. With Claude Managed Agents, each component can evolve independently. You can change the execution logic without affecting the reasoning, and vice versa.
The service acts as a flexible meta-harness, compatible with specific harnesses like Claude Code, the harness dedicated to agent coding workflows. This modularity ensures long-term scalability that monolithic architectures cannot offer.
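As a mental model, the harness loop can be sketched in a few lines of Python. Everything below is illustrative: the class names, the stub model, and the sandbox function are invented for the sketch and are not Anthropic's actual API. The point is the decoupling itself, with the session as an append-only log, the harness as the loop, and the sandbox as the executor.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Append-only log of every interaction in the agent run."""
    events: list = field(default_factory=list)

    def append(self, event: dict) -> None:
        self.events.append(event)  # only ever appended, never mutated

def sandbox_execute(tool_call: dict) -> str:
    """Stand-in for the managed sandbox (code exec, file edits, ...)."""
    return f"ran {tool_call['name']}"

def harness(model, task: str, max_turns: int = 10) -> Session:
    """The loop: call the model, route tool calls, log everything."""
    session = Session()
    session.append({"role": "user", "content": task})
    for _ in range(max_turns):
        reply = model(session.events)
        session.append({"role": "assistant", "content": reply})
        if reply.get("tool_call"):
            result = sandbox_execute(reply["tool_call"])
            session.append({"role": "tool", "content": result})
        else:
            break  # model produced a final answer
    return session

# Stub "brain": asks for one tool call, then finishes.
def stub_model(events):
    if not any(e["role"] == "tool" for e in events):
        return {"text": "", "tool_call": {"name": "run_tests"}}
    return {"text": "done"}

log = harness(stub_model, "fix the failing test")
```

Because the model, loop, and executor only meet at these three interfaces, any one of them can be swapped without touching the others, which is exactly the property the Brain/Hands/Session split buys you.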
Agent Teams vs Subagents: two very different coordination modes
Within Claude Managed Agents, two operating modes coexist, and the choice between them has a direct impact on costs and results.
Agent Teams: parallel collaboration with independent contexts
Agent Teams coordinate multiple Claude instances simultaneously. A lead agent assigns tasks via a shared list with three states (pending, in progress, completed) and dependency management between tasks. Teammate agents can then self-coordinate and communicate directly with each other, without going through the main agent.
Each teammate has its own independent context. This fundamentally distinguishes them from subagents, which operate in the same session as the main agent and only report results. For complex tasks requiring parallel work, such as developing a feature while another agent performs code review, Agent Teams are clearly superior.
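The shared task list can be pictured as a small state machine. The schema below is an assumption for illustration (Anthropic has not published the exact format); it captures the three states and the dependency gating described above:

```python
# Hypothetical shape of the lead agent's shared task list:
# three states, and a task is claimable only once its deps are done.
PENDING, IN_PROGRESS, COMPLETED = "pending", "in_progress", "completed"

class TaskList:
    def __init__(self):
        self.tasks = {}  # task_id -> {"state": ..., "deps": [...]}

    def add(self, task_id: str, deps=()):
        self.tasks[task_id] = {"state": PENDING, "deps": list(deps)}

    def claimable(self):
        """Tasks a teammate may pick up: pending, all deps completed."""
        return [
            tid for tid, t in self.tasks.items()
            if t["state"] == PENDING
            and all(self.tasks[d]["state"] == COMPLETED for d in t["deps"])
        ]

    def claim(self, task_id: str):
        self.tasks[task_id]["state"] = IN_PROGRESS

    def complete(self, task_id: str):
        self.tasks[task_id]["state"] = COMPLETED

board = TaskList()
board.add("implement_feature")
board.add("code_review", deps=["implement_feature"])

# code_review stays blocked until implement_feature completes
board.claim("implement_feature")
board.complete("implement_feature")
```

The dependency gate is what lets teammates self-coordinate: an agent polls for claimable tasks instead of waiting for the lead agent to hand work out one item at a time.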
Subagents: efficiency on targeted tasks
Subagents remain in a single session and report only to the main agent. Their token cost is lower, making them the right choice for targeted and well-defined tasks. If your workflow is sequential and tasks do not benefit from parallelism, subagents are more than sufficient.
| Criterion | Agent Teams | Subagents |
|---|---|---|
| Contexts | Independent per agent | Shared session |
| Communication | Direct between teammates | Report to main agent only |
| Token cost | Higher | Lower |
| Ideal for | Complex parallel work | Sequential targeted tasks |
Tip: Always start with subagents to validate your business logic. Migrate to Agent Teams only when you’ve identified tasks that benefit from parallelism. You’ll save tokens and clarify your architecture.

Three production use cases that prove the service’s value
Product announcements are one thing. Real deployments are another. From the public beta launch, three companies confirmed their adoption of Claude Managed Agents with concrete applications.
Notion: automated workspace delegation
Notion uses Claude Managed Agents for workspace delegation. Hosted Claude agents handle complex tasks in the platform’s collaborative environments autonomously and without infrastructure setup on Notion’s side. Execution integrates directly with Notion’s native tools, allowing users to delegate entire workflows rather than isolated actions.
Rakuten: agents in Slack via Claude Cowork
Rakuten has deployed enterprise agents directly in Slack, relying on Claude Cowork, the Managed Agents component that supports desktop automations, local files, and third-party applications. Scaling and monitoring of the agents are handled automatically, freeing teams from any technical supervision of communication workflows.
Sentry: automated debugging in production environments
Sentry applies Claude Managed Agents to automated debugging. The agents analyze and resolve errors directly in production environments via the hosted API, without Sentry needing to maintain its own sandboxes. It’s a DevOps use case that illustrates the service’s value proposition: dynamic execution adapts to the nature of each error without prior manual configuration.
These three early adoptions span very different sectors (collaborative productivity, e-commerce, DevOps), confirming the architecture's versatility.
Autonomy metrics: figures that show real progress
Beyond announcements, Anthropic publishes internal data on the evolution of its agents’ autonomy. These figures are the most concrete available to assess the service’s maturity.
On Claude Code, the autonomy of long sessions has doubled in three months, from 25 to 45 minutes without interruption. Meanwhile, the number of human interventions per session has dropped from 5.4 to 3.3, a reduction of nearly 40%. The success rate on complex tasks has also doubled over the same period.
On the user side, beginners self-approve about 20% of sessions in full autonomy. This figure rises to 40% for experienced users, who have learned to calibrate their trust in the agent based on the task type.
An interesting behavior emerges on complex tasks: Claude requests clarifications twice as often as on simple tasks and self-stops when uncertainty is too high. This built-in caution mechanism limits costly errors in production, at the cost of a slight loss of autonomy on edge cases.
Caution: Claude Managed Agents is in public beta. Agent behaviors may evolve between releases. For critical workflows, plan a validation phase and maintain human checkpoints on high-impact decisions.
Pricing, prerequisites, and positioning against alternatives
Access to Claude Managed Agents is open to all Anthropic API accounts, in public beta by default. You just need to use the managed-agents-2026-04-01 header in your requests (the Claude SDK adds it automatically).
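If you call the REST API directly rather than through the SDK, the beta flag travels in a request header. The sketch below assumes the usual Anthropic header conventions (`x-api-key`, `anthropic-version`, `anthropic-beta`); it only builds the header dict, the actual request is up to your HTTP client:

```python
# Beta flag from the announcement; the SDK is said to attach it for you,
# so this manual construction only matters for raw REST calls.
BETA_FLAG = "managed-agents-2026-04-01"

def request_headers(api_key: str) -> dict:
    """Headers for a direct Anthropic API call with the beta enabled."""
    return {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "anthropic-beta": BETA_FLAG,
        "content-type": "application/json",
    }

headers = request_headers("sk-ant-...")
```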
Pricing consists of two elements:
- $0.08 per session hour, regardless of task complexity.
- The standard Claude API token prices, unchanged.
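Estimating a session's cost is then straightforward: session hours at the flat rate plus token usage at the standard rates. The helper below is a back-of-the-envelope sketch; the per-million-token rates in the example are placeholders, not actual prices:

```python
# Published flat rate: $0.08 per session hour.
SESSION_RATE_PER_HOUR = 0.08

def session_cost(hours: float, input_tokens: int, output_tokens: int,
                 in_price_per_mtok: float, out_price_per_mtok: float) -> float:
    """Total = session-hour fee + token usage at standard API rates."""
    token_cost = (input_tokens * in_price_per_mtok
                  + output_tokens * out_price_per_mtok) / 1_000_000
    return hours * SESSION_RATE_PER_HOUR + token_cost

# e.g. a 2-hour session with 500k input / 100k output tokens,
# at hypothetical $3 / $15 per million tokens:
cost = session_cost(2, 500_000, 100_000, 3.0, 15.0)
```

Note that for Agent Teams, each teammate is a separate Claude instance, so the token term multiplies with the number of agents while the session-hour term does not.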
Execution occurs exclusively on Anthropic’s infrastructure. This is both an advantage (no server management) and a constraint (no on-premise or native multi-cloud deployment). For companies with strict data sovereignty requirements, this point deserves attention.
Compared to a solution like LangChain, Claude Managed Agents is preferable when you want to go into production quickly without building your own orchestration. LangChain offers more flexibility in model choice but demands significantly more engineering effort. If your stack is entirely Claude-oriented and your tasks are long and asynchronous, Anthropic's managed service is hard to beat. The broader context of autonomous agents is well analyzed in Sam Altman's vision of the future of AI agents.

Sector impact: from legal tech to extended enterprise
The launch of Claude Managed Agents is part of a broader trend: the verticalization of AI agents. Rather than offering a generic tool, Anthropic targets sectors where workflows are complex, repetitive, and costly to manage manually.
Legal tech is a particularly interesting example. Law firms and corporate legal teams can now build specialized agents (contract analysis, regulatory monitoring, case preparation) without relying on specialized software vendors. The infrastructure is provided by Anthropic, while the business logic remains internal.
Anthropic has also confirmed a contract with Allianz for building customized agents in insurance, including the deployment of Claude Code for technical teams. This is the first major sectoral deployment of 2026, illustrating how regulated sectors are gradually adopting managed agents.
Anthropic’s revenue growth validates this dynamic: run-rate revenue in the agent development segment exceeds $2.5 billion, having recently doubled. EMEA revenues have increased ninefold in twelve months, driven notably by the opening of Paris and Munich offices at the end of 2025. To understand how Claude positions itself against its direct competitors in the AI model market, the comparison between Claude and ChatGPT remains a useful reference.
The official documentation for Claude Managed Agents is available on Anthropic’s engineering blog, with orchestration patterns, getting started guides, and SDK references.
Conclusion
Claude Managed Agents represents a maturity step in the industrialization of AI agents. The Brain/Hands/Session decoupling, real data on Claude Code’s growing autonomy, and early adoptions at Notion, Rakuten, and Sentry show that the service goes beyond the concept stage. At $0.08 per session hour, the entry barrier is low for teams wanting to test agent workflows in production without infrastructure investment.
Limitations exist: public beta, dependency on Anthropic’s infrastructure, higher token costs for Agent Teams. But for the majority of enterprise use cases, these constraints are largely offset by the speed of access to production. Anthropic’s pivot to managed agents is not a gamble: it’s a direct response to market demand, measured and documented.
FAQ
What exactly is Claude Managed Agents?
Claude Managed Agents is a service launched by Anthropic on April 8, 2026, providing a hosted infrastructure to deploy and run autonomous AI agents based on Claude. It handles sandboxing, state management, and tool execution for developers, who can focus solely on their agent’s logic. Access is open to all Anthropic API accounts in public beta, via the managed-agents-2026-04-01 header.
What’s the difference between Agent Teams and subagents?
Agent Teams coordinate multiple Claude instances with independent contexts, direct communication between agents, and a shared task list. Subagents operate in the same session as the main agent and only report results to it. Agent Teams are suited for complex parallel tasks but cost more in tokens; subagents are more economical for targeted and sequential tasks.
How much does Claude Managed Agents cost?
The service charges $0.08 per session hour, in addition to the standard Claude API token prices. There is no fixed subscription or additional infrastructure cost. Execution is exclusively on Anthropic’s servers. For short or infrequent tasks, this model is very accessible. For large volumes of Agent Teams, the token costs of each separate Claude instance add up and should be anticipated.
Which companies are already using Claude Managed Agents?
Three companies confirmed their adoption from the public beta launch: Notion for collaborative workspace delegation, Rakuten for enterprise agents in Slack via Claude Cowork, and Sentry for automated debugging in production environments. Anthropic also announced a contract with Allianz for customized agents in the insurance sector.
Is Claude Managed Agents suitable for beginners without DevOps experience?
Yes. The service is designed to eliminate infrastructure complexity. You need an Anthropic API key (free upon registration), and the SDK automatically adds the necessary beta header. You configure your agent via the Claude console, Claude Code, or CLI, then launch a session. No servers to manage, no sandbox to configure. Novice users start with about 20% of sessions in full auto-approve, a rate that rises to 40% with experience and accumulated trust.