AI Governance · 4 min read

ChatGPT at Work: 5 Risks You Need to Know

IT Buddy · 24 March 2026

The invisible AI usage

Ask employees whether they use AI at work and many will say no. Ask them whether they use ChatGPT, Copilot, Grammarly, or DeepL – and the answer is almost always yes.

This is not dishonesty. It is a lack of awareness about what AI actually is. And it creates a problem for the business: you have no visibility into which data is being shared with third parties, which decisions are being supported by AI, or what information is being fed into systems you do not control.

Here are the five risks you need to know.


Risk 1: Leakage of confidential information

It is tempting to paste an email, a contract draft, or an internal report into ChatGPT and ask for help. It is fast and effective. But what happens to that information?

Under OpenAI's standard terms for the free version, conversations may be used to improve the model. This means confidential business information could potentially end up in training data – and in the worst case, appear in responses to other users.

Real-world example: In 2023, Samsung employees leaked internal source code and meeting notes via ChatGPT. Samsung subsequently banned the use of external AI tools internally.

What you do: Establish clear rules about what information must never be shared with AI tools. Consider enterprise versions with data isolation (ChatGPT Enterprise, Claude Enterprise, Copilot for M365).


Risk 2: GDPR violations when processing personal data

If an employee asks ChatGPT to summarise customer information, write a letter to a named individual, or analyse data containing names and email addresses – that is likely a GDPR violation.

Personal data must not be shared with third-party AI systems without:

  1. A valid legal basis for processing
  2. A data processing agreement with the vendor
  3. Documentation in the processing register

Most businesses have none of these in place for AI tools employees use on their own initiative.

What you do: Review which AI tools are in use and establish data processing agreements where necessary. Train employees on what constitutes personal data.


Risk 3: Errors and "hallucinations" in important decisions

AI models can produce answers that sound convincing but are factually wrong. This is called hallucination. The problem is that the output looks professional and is well phrased – which makes the errors easy to miss.

If an employee uses ChatGPT to check legal information, interpret a regulation, or evaluate a supplier – and blindly trusts the answer – the consequences can be serious.

Example: A lawyer in the US submitted a court filing that cited six fictitious court cases generated by ChatGPT. None of them existed.

What you do: Foster a culture of source criticism. AI is a tool, not an authority. Important decisions must always be verified by a human.


Risk 4: Copyright and ownership of content

Who owns the content AI creates for you? And is the content based on copyright-protected material?

This is a legal grey area that has not yet been resolved in many jurisdictions. Some risks:

  • AI-generated content may resemble existing copyright-protected works
  • In some countries (possibly including Norway), AI-generated content may not be eligible for copyright protection
  • Vendor terms vary regarding who owns the output

What you do: Use AI to assist and inspire, not to copy verbatim. Be particularly careful with code, images, and text you plan to publish or sell.


Risk 5: Dependency and loss of competence

A longer-term risk that is harder to measure: when employees use AI for everything from emails to analysis, they may gradually lose the ability to do the tasks themselves.

This is not merely hypothetical. Studies show that over-reliance on navigation assistance weakens spatial memory. Similar effects have been documented in writing and problem-solving.

For the business, this means:

  • Vulnerability if the AI tool goes down or changes its terms
  • Difficulty quality-checking AI output without independent knowledge
  • Reduced critical thinking over time

What you do: Define which tasks AI should assist with, and which employees should master themselves. Use AI as a stepping stone, not a replacement.


What is the solution?

You do not need to ban AI at work. That will not work anyway – and it would mean throwing away a powerful tool.

What you need is structure:

  1. An AI policy that specifies what is permitted, what is not, and which tools are approved
  2. Training of employees in responsible AI usage
  3. Approved tools with the correct data processing agreements in place
  4. Quality control procedures for AI-generated content

This sounds like a bigger project than it actually is. Many businesses are up and running in two to four weeks with the right support.

Need help getting started? Take our AI Ready assessment – free, takes 5–10 minutes – and receive a report with concrete recommendations for your business.
