The AI Act is not just for big tech
Many have heard of the EU AI Act. Fewer know it is already partially in force – and that it applies to businesses of all sizes.
The most common misconception we encounter: "Surely this only applies to large technology companies?" The answer is no. The AI Act applies to everyone who uses AI in their operations – including businesses with 10, 20, or 50 employees.
This article explains what the AI Act actually means for Norwegian SMBs, without hiding behind legal language.
What is the AI Act?
The EU AI Act (Regulation (EU) 2024/1689) is Europe's landmark law on artificial intelligence. It was adopted by the European Parliament in 2024 and is being rolled out in phases through 2027.
Norway is not an EU member, but through the EEA Agreement it is obliged to implement EEA-relevant EU legislation. The AI Act will therefore become Norwegian law – it is only a matter of time.
The short version: The AI Act sets requirements for how AI systems are developed, sold, and used in Europe. The higher the risk an AI system poses, the stricter the requirements.
What does this mean for your business, in practice?
As a Norwegian SMB, you are in the vast majority of cases a user of AI – not a developer. This means you buy and deploy AI tools made by others (ChatGPT, Copilot, a recruitment system, an analytics tool).
As a user (a "deployer" in the AI Act's terminology), you have lighter obligations than providers, but you are still responsible for three things:
1. Knowing which risk category your AI usage falls into
The AI Act divides AI systems into four categories:
| Category | Examples | Requirements |
|---|---|---|
| Unacceptable risk | Social scoring, hidden manipulation | Prohibited |
| High risk | CV screening, credit decisions, HR monitoring | Strict requirements |
| Limited risk | Chatbots, AI communicating with customers | Transparency requirements |
| Minimal risk | Spam filters, recommendation engines, writing assistants | No specific requirements |
The deciding factor is not which tool you use, but what you use it for.
An example: ChatGPT used to write emails is minimal risk. ChatGPT used to sort job applications and make initial decisions about who advances – that is high risk.
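The use-based logic above can be sketched as a simple lookup. The use-case labels and tiers here are illustrative examples for orientation only; classifying a real system is a legal assessment, not a dictionary lookup:

```python
# Illustrative mapping from what an AI tool is used FOR to an AI Act risk tier.
# The labels below are examples only -- real classification requires legal review.
RISK_BY_USE = {
    "social scoring": "unacceptable",
    "cv screening": "high",
    "credit decision": "high",
    "customer chatbot": "limited",
    "spam filtering": "minimal",
    "writing assistance": "minimal",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case, or flag it for manual assessment."""
    return RISK_BY_USE.get(use_case.lower(), "unclassified: assess manually")

print(classify("writing assistance"))  # minimal
print(classify("CV screening"))        # high
```

Note that the same tool appears in several tiers depending on the use case – which is exactly the point: classify uses, not products.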
2. Ensuring high-risk AI is used with human oversight
Do you use AI in recruitment, credit assessment, or similar decision-making processes? The AI Act requires meaningful human oversight of such systems: a person must be able to review and override the outcome. AI can assist, but cannot make the final decision alone.
3. Verifying that your vendors are compliant
You are responsible for using tools that comply with the regulation. This means you should be asking your vendors about compliance – especially when renewing agreements.
Key deadlines
| Date | What happens |
|---|---|
| February 2025 | Ban on AI systems with unacceptable risk takes effect |
| August 2025 | Requirements for providers of general-purpose AI models (e.g. GPT-4, Claude, Gemini) |
| August 2026 | Requirements for high-risk AI systems |
| August 2027 | Requirements for certain existing systems |
For most Norwegian SMBs, it is the August 2026 requirements that are most relevant – that is when high-risk AI, including many HR and recruitment tools, must meet strict documentation standards.
It sounds far away. It is not. Building the systems and procedures takes time.
What happens if you do nothing?
The AI Act allows for fines of up to €35 million or 7% of global annual turnover for the most serious violations, such as using prohibited AI practices. Fines for breaches of other obligations are lower (up to €15 million or 3% of turnover), but not insignificant.
But fines are not the biggest risk for an SMB. The real risks are:
- Reputational damage if it emerges that you are processing personal data unlawfully via AI
- Loss of client contracts because customers and partners are starting to impose compliance requirements
- Audit risk if you cannot document your AI usage
What should you do now?
You do not need a lawyer or a hundred-page consulting report. Start with the simple things:
Step 1: Map
Which AI tools are used in the business, by whom, and for what? Many managers discover employees are using tools they did not know about.
Step 2: Classify
Go through the list and assess: are any of these tools used to make decisions that affect people (recruitment, performance assessment, credit decisions)? That is where the risk sits.
Step 3: Set simple rules
A straightforward AI policy that specifies what is permitted, what is not, and which tools are approved – that is the single most important action you can take.
Step 4: Document
Start logging which AI tools you use and for what purpose. It takes little time to do now, and saves a great deal of time if you are ever audited.
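The mapping and logging steps above can be sketched as a minimal inventory register. The tool names, columns, and risk labels here are illustrative assumptions – the AI Act does not prescribe a specific format:

```python
import csv
import io
from datetime import date

# Hypothetical inventory entries -- names and risk labels are examples only.
TOOLS = [
    {"tool": "ChatGPT", "used_for": "drafting emails",
     "risk": "minimal", "owner": "Marketing"},
    {"tool": "CV screening service", "used_for": "shortlisting applicants",
     "risk": "high", "owner": "HR"},
]

def to_register(tools):
    """Render the inventory as CSV text, flagging high-risk rows for follow-up."""
    buf = io.StringIO()
    fields = ["tool", "used_for", "risk", "owner", "logged", "needs_review"]
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for t in tools:
        row = dict(t,
                   logged=date.today().isoformat(),
                   needs_review="yes" if t["risk"] == "high" else "no")
        writer.writerow(row)
    return buf.getvalue()

print(to_register(TOOLS))
```

Even a shared spreadsheet with these columns does the job; the point is that each tool, its purpose, and its risk level are written down somewhere auditable.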
One thing you can do today
Take our free AI Ready assessment. It takes 5–10 minutes, and you receive a concrete report showing where your business stands in relation to the AI Act's requirements – and what should be addressed first.
No commitment. Just a solid starting point.