EU AI Act Explained: Obligations, Risk Tiers, and What to Do Next

A detailed explanation of the EU AI Act’s risk tiers, obligations, and what UK businesses deploying AI need to have in place before 2 August 2026.

By Clausely Team

The four risk tiers

The EU AI Act introduces a tiered regulatory framework for artificial intelligence. Understanding which tier your business falls into is the starting point for knowing what you need to do.

Unacceptable risk — banned outright

Certain uses of AI are prohibited entirely under the Act:

  • AI systems that use subliminal or purposefully manipulative techniques to materially distort people’s behaviour.
  • Social scoring systems that evaluate or classify individuals based on their behaviour or personal characteristics, leading to detrimental treatment.
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).
  • AI that exploits the vulnerabilities of specific groups, such as those related to age or disability.

When the bans took effect

These bans have been in force since February 2025.

High risk — significant obligations

The main categories of high-risk AI systems are listed in Annex III of the Act (AI used as a safety component of regulated products is covered separately under Annex I). Annex III includes AI used in:

  • Recruitment and CV screening.
  • Credit scoring and insurance risk assessment.
  • Healthcare diagnosis and treatment.
  • Educational assessment.
  • Law enforcement and border control.
  • Access to essential public services.

What high-risk obligations look like

If your business uses AI in any of these areas, your obligations include:

  • A Fundamental Rights Impact Assessment.
  • A Risk Management Plan.
  • A Conformity Self-Assessment.
  • Human Oversight procedures.
  • Technical documentation.

Limited risk — transparency obligations

Businesses using AI chatbots or generating AI content must disclose that AI is involved:

  • Tell users when they are talking to an AI.
  • Label AI-generated content.
  • Disclose when AI has been used to generate synthetic media.

Minimal risk — light touch

Most AI applications fall here — spam filters, AI-powered recommendations, basic automation. An AI Acceptable Use Policy is still recommended as evidence of governance.

What obligations apply to most UK SMEs?

Even if you don’t operate high-risk AI, Article 4 of the Act requires providers and deployers of AI to ensure their staff have a sufficient level of AI literacy. Although the UK is outside the EU, the Act reaches UK businesses that place AI systems on the EU market or whose AI outputs are used in the EU. In practice this means:

  • An AI Acceptable Use Policy.
  • AI Literacy training records.
  • Transparency disclosures where AI interacts with customers.
  • A record of the AI systems your business uses and why.

What about businesses using AI in hiring?

Recruitment AI is explicitly listed as high-risk under Annex III. If you use any AI tool to screen CVs, rank candidates, or assist in hiring decisions, you are operating a high-risk AI system regardless of your business size. You need the full High-Risk Ready pack at a minimum.

What’s the next step?

Use our free compliance risk check at clausely.co.uk/compliance-checker to identify your tier and your specific obligations in under two minutes.

If you already know your tier, go straight to the relevant pack — Essentials Pack at £399 (clausely.co.uk/packs/essentials), Professional Pack at £899 (clausely.co.uk/packs/professional), or High-Risk Ready Pack at £2,499 (clausely.co.uk/packs/high-risk).

Recommended next step

Identify your tier in under two minutes.

The free compliance risk check tells you which risk tier you fall into and which pack matches your obligations — Essentials, Professional, or High-Risk Ready.

Check my compliance risk

This article was written with AI assistance and reviewed for accuracy against current UK and EU regulatory guidance. It does not constitute legal advice. If you require specific legal guidance, please consult a qualified solicitor.