AI Governance · 7 min read

Claude Cowork: When the AI Agent Moves to Your Desktop — And What Norwegian SMBs Must Structure First

Uros Vujic · 12 May 2026

From chatbot to colleague

In January 2026, Anthropic launched something new: Claude Cowork. First as a quiet research preview for Mac. By April, it was generally available on both Mac and Windows — and included in every paid Claude plan.

It sounds like an update. It isn't.

Cowork is the first real step away from "AI as chatbot" and into something else: an agent that actually does things on your machine. It reads files. It edits documents. It navigates Google Drive. It sends emails on your behalf. It flags contradictory clauses in DocuSign contracts.

Anthropic was direct when they announced it:

"It might be the first time you're using a more advanced tool that moves beyond a simple conversation."

For Norwegian SMBs, this is where it gets interesting — and where structure starts to matter.


What Cowork actually is

Claude Cowork is an autonomous assistant that lives on your machine. You give it a goal, and it works across your local files, folders, and connected services to deliver a finished result.

Anthropic is explicit that the target audience is non-technical: analysts, legal professionals, finance teams, project managers, operations. People who live in Excel, Word, Gmail, and Drive — not in the terminal.

What Cowork does in practice:

  • Reads and edits files in a designated folder on your machine
  • Connects to Google Drive, Gmail, DocuSign, and a range of other services via "Deep Connectors"
  • Builds reports by pulling data from multiple sources at once
  • Drafts emails based on project data
  • Compares contracts and flags inconsistencies
  • Builds PowerPoint presentations using Excel data

Since May 2026, it also includes ten pre-built "Finance Agents" — templates for KYC screening, month-end close, pitchbook building, earnings review, and more. Ready to roll out across Excel, Word, PowerPoint, and Outlook.

This matters. For the first time, a frontier model isn't just a conversational partner — it's a colleague that touches the same material as people in your business.


Why this is a governance question, not an IT question

Let's be concrete.

An analyst at a Norwegian consulting firm gives Cowork access to her project folder on OneDrive and a client folder on Google Drive. She asks the agent: "Draft the final report based on the project documents, and send it to the client when it's ready."

What just happened in that one sentence?

  • Cowork got read access to every document in the project folder — including ones she hadn't thought about
  • It connected to the client's Drive — which may contain data her employer formally shouldn't hold
  • It wrote a draft that included quoted passages from an unpublished internal memo
  • It sent the email to the contact it found at the top of the client folder — not necessarily the right recipient

None of this is Cowork's "fault". It's the absence of structure before the agent was let loose.

Anthropic knows this. They explicitly highlight two risks: prompt injection (where the agent is manipulated by content in the files it reads) and unintended file deletion or modification. Their design principle is human-in-the-loop: the agent performs tasks, but consequential decisions stay with the user.

But what counts as "consequential" isn't defined in Cowork. It has to be defined in your business. That's governance.


The four things that need to be in place

When an SMB is about to roll out Cowork — or any agent that touches files and services — four things need to be settled before the agent gets access:

1. RBAC (role-based access control). Who can give an agent access to which files? Should a consultant be able to connect Cowork to the client's entire Drive — or only to the project folder? Should HR staff be able to give the agent access to personnel folders? These questions need answers before the agent is installed.

2. DPIA-light (data protection impact assessment). Cowork sends data to Anthropic's models when it works. For Norwegian businesses under GDPR, that means each use case must be assessed: what personal data is processed, where it's stored, who has access, and what the legal basis is. You don't need a 40-page DPIA for every scenario — but you need a structured assessment.

3. AI policy for employees. What's okay to do with Cowork, and what isn't? Should the agent be able to send emails on your behalf? Should it be able to sign things in DocuSign? Should it be allowed to analyse client data outside the agreed scope? Without a written policy, boundaries get set ad hoc — and that's where things go wrong.

4. Traceability. What has the agent done, and when? Since April 2026, Anthropic has provided Cowork with OpenTelemetry observability on enterprise plans — which means logging and tracing are technically possible. But it has to be enabled, and the logs have to be reviewed regularly.
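To make the first and fourth elements concrete, here is a purely illustrative sketch. This is not Cowork's actual configuration; every name in it is hypothetical. It shows the pattern: a role-to-folder allowlist for RBAC, plus an append-only audit log for traceability, enforced by a wrapper that sits between the employee and the agent.

```python
import json
import time
from pathlib import Path

# Hypothetical role-based allowlist: which folders an agent session,
# started by someone in a given role, may touch. Everything else is denied.
RBAC = {
    "analyst": ["projects/acme"],  # the project folder only, not the whole Drive
    "hr":      [],                 # HR may not expose personnel folders at all
}

AUDIT_LOG = Path("agent_audit.jsonl")  # append-only log of every agent action

def is_allowed(role: str, requested_path: str) -> bool:
    """True only if the path sits inside a folder the role may expose."""
    return any(Path(requested_path).is_relative_to(p) for p in RBAC.get(role, []))

def record_action(role: str, action: str, target: str, allowed: bool) -> None:
    """Traceability: log the decision before the action runs, never after."""
    entry = {"ts": time.time(), "role": role, "action": action,
             "target": target, "allowed": allowed}
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def agent_request(role: str, action: str, target: str) -> bool:
    """Gate an agent action: check RBAC, log the decision, then allow or deny."""
    allowed = is_allowed(role, target)
    record_action(role, action, target, allowed)
    return allowed
```

In this sketch, `agent_request("analyst", "read", "projects/acme/report.docx")` would be allowed, while the same analyst asking for `"hr/salaries.xlsx"` would be denied and logged. The point isn't the code itself: it's that both decisions exist on paper before the agent is installed.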

This is what IT Buddy means when we say structure. It isn't an abstract principle. It's four concrete elements that have to exist on paper, in the systems, and in the mind of every employee — before the agent is active.


Microsoft's answer: Copilot Cowork

In March 2026, Microsoft launched its own product: Copilot Cowork, built on the same stack as Anthropic's. It ships as part of the M365 Frontier program and as a new "E7" license.

The difference is where the agent runs. Claude Cowork runs in Anthropic's infrastructure. Copilot Cowork runs in your own M365 tenant under Microsoft's enterprise data protection.

For Norwegian SMBs that already run M365, this is a meaningful distinction. Data sovereignty, existing data processing agreements, certifications Microsoft already holds — all of it counts. But the underlying risk is the same: an agent that touches files and services requires structure, regardless of who hosts it.

The choice between Claude Cowork and Copilot Cowork is a strategic decision — not a technical preference.


The real question

It's tempting to see Cowork as "Claude with superpowers". That's the wrong way to look at it.

Cowork isn't a better Claude. It's a new kind of software. An agent that touches your environment doesn't have the same properties as a chatbot: it carries different risks, and it needs different frameworks around it.

The question for your SMB isn't "should we adopt Cowork?" It's:

  • Do we know which data an agent can touch, and which it shouldn't be able to touch?
  • Do we have a written policy for what employees can and cannot ask the agent to do?
  • Do we know how to reverse it if the agent does something wrong?
  • Do we have a log of what the agent has actually done?

If the answer is yes to all four — go ahead. Cowork can save tens of thousands of hours and cost far less than the alternatives.

If the answer is no — or "I'm not sure" — the problem isn't Cowork. It's the missing structure.


What IT Buddy does

We help Norwegian SMBs put exactly this structure in place before agents are rolled out. Concretely, that means:

  • Mini governance report mapping which data and services exist, and where the risk sits
  • RBAC setup defining who can do what — including when it's an agent asking
  • DPIA-light for the most relevant use cases
  • AI policy tailored to your business, not a template pulled off the internet

It's not exciting material. But it's what lets you enter the Cowork era without waking up one morning to find that the agent sent a confidential report to the wrong client.

AI doesn't start with technology. It starts with structure.

Get in touch for a free AI Ready assessment →


Read also: Claude Managed Agents: Anthropic Makes AI Agents Production-Ready


Uros Vujic

Managing Director, IT Buddy AS

Uros helps Norwegian SMBs adopt AI in a controlled, sustainable way. Background in IT infrastructure in banking and finance, specialising in AI governance, RBAC, and GDPR-compliant implementation.

Ready for the next step?

Take our AI Ready assessment and find out where your business stands.

Take AI Ready Assessment