A model too powerful to release
On 7 April 2026, Anthropic announced something unusual: a new AI model called Claude Mythos – and simultaneously explained why it won't be made broadly available.
The reason? Mythos is too good at finding security vulnerabilities in critical software. Released freely, it could become a precise tool for attackers looking to compromise the infrastructure that banks, hospitals, power grids, and government systems depend on.
It's the first time an AI company has actively chosen to restrict a model not because it's dangerous in a conventional sense – but because it's too competent.
What is Project Glasswing?
The launch of Mythos came in the context of Project Glasswing – a collaboration between some of the biggest players in the technology industry:
Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks – with Anthropic as one of the initiators.
The goal is to secure the world's most critical software infrastructure. Not one company's systems. Not one country's networks. The underlying code that billions of people depend on – operating systems, financial systems, health records, energy management.
Glasswing isn't a product. It's a coordinated response to a recognition maturing in the technology industry: AI models have become powerful enough that they can threaten the very infrastructure they're meant to improve.
Mythos in practice – and the controversy
Mythos is in a limited preview. Access is controlled and restricted to selected partners and contexts.
The coverage is more complicated. According to TechCrunch, Trump administration officials are reportedly encouraging banks to test the model – a sign that regulators and financial institutions see Mythos as a tool for offensive security testing and vulnerability detection.
At the same time, technology journalists are asking the unavoidable question: is Anthropic limiting Mythos to protect the internet – or to protect itself? A model with these capabilities, distributed only to paying enterprise customers and selected authorities, gives Anthropic a market position no other AI company has.
The answer is probably both. And that isn't necessarily problematic – but it illustrates something important.
The real point: governance didn't keep up
The most interesting thing about Mythos isn't the model itself. It's what it reveals about the state of AI governance globally.
AI development has moved faster than the frameworks meant to govern it. Models have become capable enough that they themselves constitute an infrastructure risk – and neither the technology industry, regulators, nor businesses were prepared.
Project Glasswing is the response: a coalition of the largest players sitting down to solve a problem they all helped create.
For Norwegian businesses, this isn't an abstract security policy topic. It's a direct reminder that AI is not a neutral tool. The more powerful the models become, the more consequential the lack of structure around their use.
We've said it before. We repeat it because it has never been more true:
AI doesn't start with technology. It starts with structure.
What this means for Norwegian SMBs
Mythos and Project Glasswing are currently far from the everyday reality of a Norwegian accounting firm or a mid-sized hotel. But the signals these announcements send are relevant now.
AI capabilities are accelerating. What is restricted today to cybersecurity partners in the Glasswing coalition will, within a few years, be available to businesses running on Microsoft 365 and AWS. The question isn't whether – it's when, and whether you'll be ready.
Compliance requirements are tightening. When AI models are powerful enough to threaten critical infrastructure, regulators respond with stricter requirements. The EU AI Act is already applicable law. New risk classifications and documentation requirements will follow. Businesses with the foundation in place will adapt quickly. Those without it will struggle.
Security and AI governance are two sides of the same coin. The Glasswing partnership is about protecting software against AI-driven attacks. But using AI responsibly in your business – with RBAC, AI policy, and traceability – is the other side of that same coin. You're not just protecting your own systems. You're part of a larger infrastructure.
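The RBAC-and-traceability idea mentioned above can be sketched in a few lines of Python. This is a minimal illustration, not a reference to any real product: the role names, the `ROLE_PERMISSIONS` table, and the actions are assumptions invented for the example. The point is simply that access decisions come from a declared policy, and every decision is logged.

```python
from dataclasses import dataclass, field

# Hypothetical roles and permissions, for illustration only.
# A real AI policy would derive these from your own org structure.
ROLE_PERMISSIONS = {
    "admin":   {"use_ai_tools", "upload_customer_data", "change_policy"},
    "analyst": {"use_ai_tools", "upload_customer_data"},
    "intern":  {"use_ai_tools"},
}

@dataclass
class AuditLog:
    """Traceability: an append-only record of every access decision."""
    entries: list = field(default_factory=list)

    def record(self, user: str, role: str, action: str, allowed: bool) -> None:
        self.entries.append(
            {"user": user, "role": role, "action": action, "allowed": allowed}
        )

def is_allowed(role: str, action: str) -> bool:
    """RBAC check: permission is granted by role, never ad hoc."""
    return action in ROLE_PERMISSIONS.get(role, set())

def request_action(user: str, role: str, action: str, log: AuditLog) -> bool:
    allowed = is_allowed(role, action)
    log.record(user, role, action, allowed)  # log denials too, not just grants
    return allowed

log = AuditLog()
print(request_action("kari", "analyst", "upload_customer_data", log))  # True
print(request_action("ola", "intern", "upload_customer_data", log))    # False
print(len(log.entries))                                                # 2
```

The design choice worth noticing is that the audit log records rejected requests as well as approved ones – that record is what lets you demonstrate, after the fact, how AI tools were actually used in your business.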
Calm amid the storm
It's easy to feel overwhelmed by the pace of AI development. A new model announced, a new restriction imposed, a new coalition formed – all within a single week.
That's understandable. But it isn't useful.
What is useful is understanding what you can actually control: your data, your access structure, your guidelines. It's not exciting work. But it's the work that lets you adopt new technology safely – whatever it's called, and whoever releases it.
IT Buddy helps you with exactly that. Not because we think we can stop the pace of development – but because we believe Norwegian businesses deserve to keep up in a way they can trust.
Get in touch for a free AI Ready assessment →
Read also: Claude Managed Agents: Anthropic Makes AI Agents Production-Ready