What is the EU AI Act?
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive law on artificial intelligence. It was adopted by the European Parliament in March 2024, entered into force in August 2024, and is being rolled out in phases until 2027.
Norway is not an EU member, but under the EEA Agreement it is obliged to adopt EEA-relevant EU legislation – including the AI Act. Norwegian implementation is therefore a question of when, not if.
A risk-based approach
The AI Act classifies AI systems into four risk levels:
Unacceptable risk – prohibited
Systems that score people based on social behaviour ("social scoring"), use subliminal techniques to manipulate people, or perform real-time biometric identification in publicly accessible spaces without specific legal authorisation. These are prohibited from February 2025.
High risk – strict requirements
Systems used in critical decisions:
- Recruitment and HR (CV screening, performance monitoring)
- Credit and insurance
- Education
- Critical infrastructure
- Law enforcement
High-risk systems require risk assessment, technical documentation, human oversight, and logging.
Limited risk – transparency requirements
Chatbots and AI that interact with people must disclose that the user is talking to an AI.
Minimal risk – no requirements
Spam filters, recommendation systems, AI in games. No specific requirements.
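The four tiers above can be sketched as a simple lookup. This is an illustrative assumption, not an official classification – the use-case names and tier assignments are hypothetical examples drawn from the lists above:

```python
# Hypothetical sketch: mapping example AI use cases to the AI Act's
# four risk tiers. The assignments are illustrative, not legal advice.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "cv_screening": "high",
    "credit_scoring": "high",
    "customer_chatbot": "limited",
    "spam_filter": "minimal",
}

def risk_tier(use_case: str) -> str:
    """Return the assumed risk tier for a use case, or 'unknown'."""
    return RISK_TIERS.get(use_case, "unknown")

print(risk_tier("cv_screening"))  # high
```

Any use case not on the list comes back as "unknown" – a useful default, since unclassified AI usage is exactly what a mapping exercise should surface.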
Timeline
| Date | What comes into force |
|---|---|
| February 2025 | Ban on unacceptable risk systems |
| August 2025 | Requirements for general-purpose AI models (GPAI) |
| August 2026 | Requirements for high-risk systems |
| August 2027 | Requirements for certain existing systems |
In practice: Most SMBs have time to prepare, but preparation should start now.
Who does it apply to?
The AI Act applies to everyone who develops, deploys, or uses AI systems in the EU/EEA – regardless of size.
This means:
- Providers of AI systems sold in the EU
- Importers and distributors of AI products
- Deployers – businesses and organisations that use AI in their operations
As a Norwegian SMB, you are typically a deployer – you buy and use AI tools from others. Deployers have lighter obligations than providers, but you are still responsible for:
- Verifying that the tools you use comply with the regulation
- Ensuring high-risk AI is used with adequate human oversight
- Documenting AI usage where required
What are GPAI models and why does it matter?
General-purpose AI models (GPAI) are models that can be used for many tasks – such as GPT-4, Claude, Gemini. From August 2025, providers of such models must meet requirements for:
- Technical documentation
- A policy for complying with EU copyright law
- Information for downstream providers and business customers that build on the model
For you as a user of these tools, this means providers (OpenAI, Anthropic, Google) must deliver clear documentation. Check that your provider is prepared.
The most common mistakes we already see
1. "We are too small for this to apply to us"
The AI Act does not differentiate by size. A ten-person business using AI in recruitment is subject to the same high-risk requirements as a large corporation.
2. "We only use ChatGPT – that isn't risky"
It depends on what you use it for. If it is used to screen job applications, that is high risk. If it is used to draft emails, that is minimal risk.
3. "We will wait until the regulation is implemented in Norway"
Norwegian implementation may come quickly. And EU customers and partners will already be demanding compliance.
What should Norwegian SMBs do now?
Step 1: Map your AI usage
Which AI tools are being used, by whom, and for what purpose? You cannot manage what you do not know.
Step 2: Classify the risk
For each tool and use case: which risk category does it fall into? High, limited, or minimal?
Step 3: Document
For high-risk usage: establish procedures, log the decisions the system supports, and ensure documented human oversight.
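Steps 1–3 amount to keeping a structured inventory of AI usage. A minimal sketch, with hypothetical tools and classifications for illustration:

```python
from dataclasses import dataclass

# Minimal sketch of an internal AI inventory (steps 1-3). The entries
# and their risk classifications are hypothetical examples.
@dataclass
class AIUseCase:
    tool: str
    used_by: str
    purpose: str
    risk_tier: str          # "unacceptable" | "high" | "limited" | "minimal"
    human_oversight: bool   # must be in place for high-risk usage

inventory = [
    AIUseCase("ChatGPT", "Marketing", "Draft emails", "minimal", False),
    AIUseCase("HR screening tool", "HR", "CV screening", "high", True),
]

# Flag high-risk entries that lack documented human oversight
gaps = [u.tool for u in inventory
        if u.risk_tier == "high" and not u.human_oversight]
print(gaps)  # [] - the one high-risk entry has oversight in place
```

Even a spreadsheet with these five columns covers the same ground; the point is that mapping, classification, and oversight live in one place you can show an auditor or a customer.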
Step 4: Review your vendors
Will the tools you use be compliant in time? Make sure your vendor agreements address this.
Step 5: Build an AI policy
A simple, clear internal policy for AI usage is the most concrete thing you can do right now.
Summary
The EU AI Act is not a future threat – it is already partially in force. For Norwegian SMBs, the most important first step is understanding which AI systems you use and which risk category they fall into.
Need help mapping and classifying AI usage in your business? That is exactly what IT Buddy does. Contact us for a no-obligation conversation.