It's no longer just for developers
Until now, building an AI agent has required a developer team, your own servers, and substantial technical work on infrastructure – orchestration, error handling, context management, security. For most Norwegian SMBs, that barrier has kept agents something you read about, not something you actually adopt.
On 8 April 2026, Anthropic changed this. They launched Claude Managed Agents in public beta – a managed service where Anthropic handles the entire infrastructure for you.
You define what the agent should do. They make sure it runs.
What is Claude Managed Agents?
Claude Managed Agents is a service on Anthropic's platform where businesses and developers can set up AI agents without building or operating the infrastructure themselves.
Here's how it works in practice:
You define the agent — either by describing it in natural language or via a YAML configuration file. What should the agent do? Which tools does it have access to? What boundaries should it operate within?
You set up guardrails — rules for what the agent should not do. That might be: never process personal data, always request approval before deleting anything, only retrieve data from approved sources.
Anthropic runs the agent — and handles all the technical elements: secure sandboxed code execution, authentication, checkpointing (the agent can resume work after interruptions), scoped permissions, and long-running sessions that last hours, not seconds.
That means the agent can work through complex, multi-step processes – without anyone supervising – and without your business needing to operate the servers it runs on.
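The define-plus-guardrails setup described above can be pictured as a single configuration file. The sketch below is purely illustrative: the field names and structure are assumptions made for this example, not Anthropic's documented schema, but they show how a definition, a tool list, and guardrails could fit together.

```yaml
# Illustrative sketch only – field names are assumptions,
# not Anthropic's documented configuration schema.
agent:
  name: supplier-quote-reviewer
  description: >
    Review incoming supplier quotes each week, compare them
    against historical prices, and send a summary with a
    recommendation.
  tools:
    - read_files           # access limited to the quotes folder
    - query_price_history  # read-only lookup against the ERP system
    - send_email           # deliver the weekly summary

  guardrails:
    - never_process: personal_data        # hard boundary
    - require_approval:                   # human sign-off first
        actions: [delete, send_external]
    - data_sources:
        allow_only:                       # approved sources only
          - approved_supplier_folder
          - erp_price_history
```

The guardrails block mirrors the rules listed above: absolute boundaries, actions that need human approval, and a whitelist of data sources. Defining those three categories is exactly the organisational work the rest of this article argues you need an AI policy for.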
What distinguishes this from a regular chatbot?
A chatbot waits for you to ask. It gives an answer. You act on the answer yourself.
An agent with Claude Managed Agents receives an assignment and carries it out. It reads files, retrieves data from systems, makes decisions along the way, and produces a result – not a draft you need to finalise, but a concrete output.
An example: an agent can go through all incoming supplier quotes from the past week, compare them against historical prices in the system, and send you a summary with a recommendation – without you being involved in each individual step.
That's not what chatbots do. It's something fundamentally different.
Who is this for right now?
The service is in public beta, and the first enterprise customers already up and running include Notion, Rakuten, and Asana. These are large international companies with their own developer teams.
For Norwegian SMBs, there are two things to note:
One: it's now possible to build and run production-ready agents without a dedicated development environment. The complexity has been dramatically reduced. What used to take months to set up can now take days.
Two: you still need to know what the agent should do, which data it should have access to, and which rules it should follow. The technical part is simpler – but the organisational and legal work is the same.
What isn't solved: governance and structure
Claude Managed Agents removes the technical barrier. It doesn't remove the need for structure.
Access management: An agent acts with the permissions you give it. Give the agent overly broad access to systems and data, and it will use it. That's the same RBAC problem we see with Copilot – but with higher stakes, because the agent acts autonomously rather than merely answering questions.
Guardrails require that you know what you want to avoid: Anthropic's platform lets you set boundaries for the agent. But you have to define those boundaries yourself. What should the agent never do? Which data should it never process? Who should approve sensitive actions? These aren't answers the platform gives you. They're answers you need to have in an AI policy before you configure the agent.
The AI Act and traceability: The EU AI Act requires businesses to be able to document the use of AI in decision-making processes. An agent that sorts candidates, approves invoices, or responds to customers is participating in decisions. Audit logs and traceability are not optional – they're a legal requirement. Read more about what the AI Act means for Norwegian SMBs.
GDPR and data minimisation: The agent should only have access to the data it needs for the task. Not all of SharePoint. Not all customer records. That's the GDPR principle of data minimisation, and it applies just as fully to agents as to humans.
What this means for Norwegian businesses in one to two years
What we're seeing now is the beginning of a curve. Claude Managed Agents is today primarily available for technical teams. In one to two years, equivalent functionality will be built into Microsoft 365, industry-specific systems, and platforms that Norwegian SMBs already use every day.
Businesses that have done the foundational work – cleaned up their access structure, created an AI policy, established routines for documentation and traceability – will be able to adopt this quickly and in a controlled manner.
Businesses that haven't done this work will either wait until it's too late, or roll out agents without control. Both are costly.
The structure you put in place now isn't just for chatbots and Copilot. It's the infrastructure that makes you ready for what's actually coming.
Read more about what AI Governance is – and why it's a leadership responsibility.
What IT Buddy does about this
We help Norwegian SMBs lay that foundation. Not theory – concrete work on access management, AI policy, and governance frameworks that make it possible to adopt AI agents in a controlled, documented manner and in line with Norwegian regulations.
Whether you're considering chatbots, Copilot, or agents like Claude Managed Agents – the starting point is the same: order in the structure.
Get in touch for a free AI Ready assessment →
Read also: How to Implement AI in Your Business – a Practical Guide for Norwegian SMBs