Regulation · European Union

EU Artificial Intelligence Act

Regulation (EU) 2024/1689 · In force since 1 August 2024

The EU AI Act is the world's first comprehensive legal framework for AI — binding law, not a voluntary framework. If your AI system affects people in the EU, it applies to you, regardless of where in the world you're based.

What it is

The world's first binding AI law

Regulation (EU) 2024/1689 entered into force on 1 August 2024. Unlike NIST RMF — which is voluntary — this is law with teeth. Fines for the most serious violations reach €35 million or 7% of global annual turnover, whichever is higher. It takes a risk-based approach: the more an AI system can harm people, the stricter the rules it faces.

Extraterritorial reach

It applies beyond Europe's borders

This is the detail that surprises most clients. The Act applies to any organisation — anywhere in the world — whose AI systems are placed on the EU market or used in the EU. If you're an Australian company whose HR screening tool is used by a European subsidiary, the Act applies. This is the EU's "Brussels Effect" in action: one regulation that shapes global practice.
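
A minimal sketch of the scope test, expressed as a predicate. The function and field names are illustrative, not the Act's own terms; the actual scope provisions (Article 2) carry more detail, including a limb for systems whose output is used in the EU even though the system runs elsewhere.

```python
# Illustrative sketch of the Act's territorial scope test (Article 2, simplified).
# Field and function names are hypothetical shorthand, not legal terminology.

from dataclasses import dataclass

@dataclass
class AISystem:
    placed_on_eu_market: bool   # sold or made available in the EU
    used_in_eu: bool            # deployed by anyone located in the EU
    output_used_in_eu: bool     # runs elsewhere, but its output affects people in the EU

def in_scope(system: AISystem) -> bool:
    """Note what is deliberately absent: where the company is headquartered."""
    return system.placed_on_eu_market or system.used_in_eu or system.output_used_in_eu

# The Australian HR-screening example above: the tool runs in Sydney,
# but a European subsidiary uses it to screen applicants.
hr_tool = AISystem(placed_on_eu_market=False, used_in_eu=True, output_used_in_eu=True)
assert in_scope(hr_tool)  # the Act applies
```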

Core logic

Risk-based, not technology-based

The Act doesn't regulate specific technologies — it regulates the risk a system poses to people. The same underlying model could be minimal risk in one use case and high risk in another. A recommendation engine for films: minimal risk. The same engine used to rank job applicants: high risk. Context is everything.

Who it targets

Providers and deployers — both have obligations

Providers (those who develop or place AI on the market) carry the heaviest burden. Deployers (those who use AI in their operations) have lighter but real obligations. Critically: if a deployer puts their own brand on a system or makes substantial changes to it, they can become legally responsible as a provider. The obligations follow the risk, not just the job title.
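
As a sketch, the role logic reduces to a short decision rule. The function and flags below are hypothetical shorthand for the tests in Articles 3 and 25, which are more nuanced in the legal text.

```python
# Illustrative decision rule for the provider/deployer distinction described above.
# Names are hypothetical; the Act's actual tests are more detailed.

def determine_role(develops_system: bool,
                   rebrands_system: bool,
                   substantially_modifies: bool) -> str:
    # Developing a system, white-labelling someone else's, or substantially
    # modifying it all attract full provider obligations.
    if develops_system or rebrands_system or substantially_modifies:
        return "provider"
    return "deployer"

print(determine_role(develops_system=False, rebrands_system=True,
                     substantially_modifies=False))  # -> "provider"
```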

Four tiers. The tier your system sits in determines everything — your obligations, your timeline, and your exposure if something goes wrong. Getting the classification right is the first and most consequential decision.

Unacceptable risk
Banned outright
Eight practices prohibited from 2 February 2025. These are considered incompatible with EU values and fundamental rights. The eight prohibited uses: subliminal or manipulative techniques that distort behaviour or impair decision-making; exploiting vulnerabilities due to age, disability, or socioeconomic situation; social scoring that leads to detrimental or disproportionate treatment (by public or private actors); profiling individuals to predict criminal behaviour based solely on personality traits; building facial recognition databases through untargeted scraping; AI that infers emotions in workplaces or education (except for medical or safety reasons); biometric categorisation that infers sensitive attributes such as race, political opinions, or sexual orientation; and real-time remote biometric identification in public spaces (with narrow law enforcement exceptions).
High risk
Strictly regulated
Subject to mandatory requirements before market placement. Two categories: AI embedded in products already covered by EU safety legislation (medical devices, vehicles, machinery); and AI listed in Annex III — covering biometrics, critical infrastructure, education, employment, essential services (credit, insurance), law enforcement, migration, and the administration of justice and democratic processes. Key examples: CV screening tools, credit scoring, medical diagnosis AI, biometric categorisation systems. Must undergo conformity assessment, maintain technical documentation, register on an EU public database, and enable human oversight.
Limited risk
Transparency obligations
Lighter obligations focused on disclosure. Users must be told they are interacting with AI — chatbots must identify themselves as AI. Providers of generative AI must ensure AI-generated content is identifiable. Deepfakes and AI-generated content intended to inform the public must be clearly labelled. The principle: people have a right to know when AI is involved in what they're seeing or who they're talking to.
Minimal risk
No mandatory obligations
The majority of AI systems today. Spam filters, AI in video games, recommendation systems for entertainment, most productivity tools. No mandatory requirements — though voluntary codes of conduct are encouraged. The Act was deliberately designed so that most AI remains unregulated: the burden falls where the harm potential is highest.
Classification trap

The same model, different tiers

A large language model used as a creative writing assistant is minimal risk. The same model deployed to make or influence decisions about loan applications is high risk. The AI Act classifies the use case and deployment context — not the underlying technology. This is the single most important concept when advising clients on their AI portfolio.
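
A toy classifier makes the point concrete. The sets below are heavily abbreviated stand-ins for Article 5 and Annex III, not complete lists; real classification means checking the legal text in full.

```python
# Sketch of use-case-based tier classification. The key point: the tier is
# keyed on the deployment context, never on the model itself.

PROHIBITED_PRACTICES = {"social scoring", "workplace emotion inference"}
ANNEX_III_DOMAINS = {"employment", "credit scoring", "education",
                     "law enforcement", "critical infrastructure"}
TRANSPARENCY_ONLY = {"chatbot", "deepfake generation"}

def classify(use_case: str, domain: str) -> str:
    if use_case in PROHIBITED_PRACTICES:
        return "unacceptable risk — banned"
    if domain in ANNEX_III_DOMAINS:
        return "high risk — conformity assessment required"
    if use_case in TRANSPARENCY_ONLY:
        return "limited risk — disclosure obligations"
    return "minimal risk — no mandatory obligations"

# The same LLM, two tiers — context is everything:
print(classify("creative writing assistant", "entertainment"))  # minimal risk
print(classify("CV ranking", "employment"))                     # high risk
```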

GPAI — a separate category

General Purpose AI models (like GPT-4, Claude, Gemini)

GPAI models sit outside the four-tier structure — they have their own obligations from August 2025. All GPAI providers must maintain technical documentation, put a copyright-compliance policy in place, and publish a summary of their training data. GPAI models deemed to pose systemic risk (presumed where training compute exceeds 10²⁵ FLOPs) face additional requirements: model evaluations, adversarial testing, incident reporting to the European Commission, and cybersecurity protections.
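
For a rough sense of where that threshold bites, the common 6 × parameters × tokens approximation for dense-transformer training compute (an industry heuristic, not anything defined in the Act) gives a back-of-envelope check:

```python
# Back-of-envelope check against the 10^25 FLOPs systemic-risk presumption.
# The 6 * N * D approximation is a widely used heuristic, not the Act's method.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")                    # ~6.30e+24 — below the threshold
print(flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)   # False
```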

The obligations on high-risk AI are substantial — and the timeline is tighter than most organisations realise. Conformity assessments, documentation, and risk management systems take 12–18 months to implement properly. A compliance-register sketch follows the checklists below.

High-risk AI — provider obligations
Providers must
  • Implement a quality management system
  • Create and maintain technical documentation
  • Conduct a conformity assessment
  • Register the system on the EU public database
  • Affix CE marking (for relevant product categories)
  • Design for appropriate human oversight
  • Ensure accuracy, robustness, and cybersecurity
  • Log system activity automatically
  • Provide instructions for deployers
  • Report serious incidents to authorities
  • Appoint an EU representative if based outside the EU
Deployers must
  • Use systems according to provider instructions
  • Assign human oversight to competent people
  • Conduct a Fundamental Rights Impact Assessment (required for public bodies and certain private deployers)
  • Monitor performance and report issues to providers
  • Inform employees when AI is used to monitor them
  • Keep logs for 6 months minimum
  • Ensure data used is relevant and representative
Deployers become providers if they
  • Place the system on the market under their own brand
  • Make substantial modifications to the system
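
A minimal sketch of such a compliance register, assuming a simple role-keyed checklist; the item names paraphrase the lists above and are not the Act's legal wording.

```python
# Hedged sketch of an obligations register keyed by role, mirroring the
# checklists above. Structure and labels are this guide's, not the Act's.

PROVIDER_OBLIGATIONS = [
    "quality management system",
    "technical documentation",
    "conformity assessment",
    "EU database registration",
    "CE marking (where applicable)",
    "human oversight by design",
    "accuracy, robustness, cybersecurity",
    "automatic activity logging",
    "instructions for deployers",
    "serious incident reporting",
    "EU representative (if based outside the EU)",
]

DEPLOYER_OBLIGATIONS = [
    "use per provider instructions",
    "competent human oversight",
    "fundamental rights impact assessment",
    "performance monitoring and issue reporting",
    "inform monitored employees",
    "retain logs for at least 6 months",
    "relevant, representative input data",
]

def outstanding(role: str, completed: set[str]) -> list[str]:
    """Return the obligations not yet evidenced for the given role."""
    obligations = PROVIDER_OBLIGATIONS if role == "provider" else DEPLOYER_OBLIGATIONS
    return [item for item in obligations if item not in completed]

print(outstanding("deployer", {"competent human oversight"}))
```
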
Enforcement timeline
1 August 2024
Act enters into force
Regulation (EU) 2024/1689 published and live. The clock starts.
2 February 2025
Prohibited practices banned · AI literacy obligations begin
All eight prohibited AI practices are now illegal. Organisations must have already removed or redesigned any systems in this category. Workforce AI literacy requirements also kick in.
2 August 2025
GPAI model obligations apply · Governance rules active
General Purpose AI model providers must have technical documentation, copyright policies, and training data summaries in place. Systemic risk GPAI models face additional requirements. EU AI governance structures fully operational.
Now → August 2026
Compliance window for high-risk AI (Annex III)
Organisations deploying high-risk AI systems need to be building compliance programmes now. Conformity assessments, technical documentation, and risk management systems take 12–18 months to implement properly. August 2026 is the main enforcement deadline.
2 August 2026
Full enforcement — Annex III high-risk systems
The primary compliance deadline for most organisations. All Annex III high-risk AI systems must be fully compliant or face enforcement action.
2 August 2027
Extended deadline — Annex I embedded systems
High-risk AI embedded in regulated products (medical devices, vehicles, machinery) and large-scale legacy IT systems get an additional year due to the complexity of conformity with existing sector legislation.
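
The timeline above reduces to a lookup from system category to applicable deadline. A minimal sketch, with category labels that are this guide's shorthand rather than the Act's terms:

```python
# Which deadline applies to which system category — mirrors the timeline above.

from datetime import date

DEADLINES = {
    "prohibited practices": date(2025, 2, 2),
    "gpai models": date(2025, 8, 2),
    "high-risk (annex iii)": date(2026, 8, 2),
    "high-risk (annex i embedded products)": date(2027, 8, 2),
}

def days_remaining(category: str, today: date) -> int:
    """Days until the applicable enforcement deadline (negative if passed)."""
    return (DEADLINES[category] - today).days

print(days_remaining("high-risk (annex iii)", date(2025, 8, 1)))  # 366
```
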
Penalties

The fines are designed to sting

Prohibited practice violations: up to €35 million or 7% of global annual turnover.

High-risk non-compliance: up to €15 million or 3% of global annual turnover.

Providing false information to authorities: up to €7.5 million or 1% of global annual turnover. For large multinationals, these are not theoretical numbers.
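
The cap logic is "fixed amount or share of global annual turnover, whichever is higher". A worked example using the three headline tiers above:

```python
# Fine caps per violation tier: (fixed cap in EUR, share of global turnover).
FINE_TIERS = {
    "prohibited practice": (35_000_000, 0.07),
    "high-risk non-compliance": (15_000_000, 0.03),
    "false information": (7_500_000, 0.01),
}

def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Whichever is higher: the fixed cap or the turnover-based cap."""
    fixed_cap, turnover_share = FINE_TIERS[violation]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# A multinational with EUR 10bn turnover facing a prohibited-practice finding:
print(f"EUR {max_fine('prohibited practice', 10e9):,.0f}")  # EUR 700,000,000
```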

Most enterprise clients will ask how the EU AI Act and NIST RMF relate to each other. The short answer: they're complementary, not competing. Understanding both — and how they map — is a genuinely differentiating capability.

EU AI Act vs NIST AI RMF — dimension by dimension

Nature
  • EU AI Act: Binding law. Mandatory for in-scope organisations.
  • NIST AI RMF: Voluntary framework. Organisations choose to adopt it.

Origin
  • EU AI Act: European Union regulation. Extraterritorial reach.
  • NIST AI RMF: US government framework. No geographic restriction.

Approach
  • EU AI Act: Risk-tier classification. Prescriptive obligations per tier.
  • NIST AI RMF: Function-based (GOVERN, MAP, MEASURE, MANAGE). Flexible and adaptable.

Focus
  • EU AI Act: Compliance, fundamental rights, market access.
  • NIST AI RMF: Trustworthiness, risk culture, continuous improvement.

Enforcement
  • EU AI Act: National market surveillance authorities. Fines up to 7% of global turnover.
  • NIST AI RMF: No enforcement mechanism. Accountability is internal.

GPAI / Foundation models
  • EU AI Act: Specific obligations for GPAI models. Systemic risk threshold defined.
  • NIST AI RMF: Framework principles apply, but no GPAI-specific provisions.

Human oversight
  • EU AI Act: Mandatory requirement for high-risk AI. Specific Article 14 obligations.
  • NIST AI RMF: Core principle throughout all four functions.

The strategic framing

NIST RMF is the practice. EU AI Act is the obligation.

Think of it this way: the NIST RMF teaches you how to manage AI risk responsibly — it builds the muscle. The EU AI Act tells you what you're legally required to demonstrate. Organisations that have implemented the NIST RMF properly will find EU AI Act compliance significantly more achievable — because they've already built the documentation, governance structures, and risk processes the Act demands. The RMF is the preparation; the Act is the exam.

Where they overlap most

Shared ground between the two frameworks

The EU AI Act's high-risk obligations map closely to NIST RMF concepts: technical documentation (GOVERN), risk management systems (MAP + MEASURE), human oversight (MANAGE), accuracy and robustness requirements (Valid & Reliable), transparency obligations (Accountable & Transparent), and the Fundamental Rights Impact Assessment (MAP function).

The Act's prohibition on manipulative AI maps to the Fair/Bias and Safe trustworthiness characteristics. Its requirement for data governance maps to NIST's emphasis on data provenance and computational bias management.
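
Expressed as a lookup table, the crosswalk looks like this. The pairings are this guide's interpretation rather than an official mapping.

```python
# AI Act obligation on the left, closest NIST AI RMF function or
# trustworthiness characteristic on the right — an interpretive crosswalk.

ACT_TO_NIST = {
    "technical documentation": "GOVERN",
    "risk management system": "MAP + MEASURE",
    "human oversight": "MANAGE",
    "accuracy and robustness": "Valid & Reliable",
    "transparency obligations": "Accountable & Transparent",
    "fundamental rights impact assessment": "MAP",
    "manipulative AI prohibition": "Fair (bias-managed) + Safe",
    "data governance": "data provenance / computational bias management",
}

for obligation, nist_home in ACT_TO_NIST.items():
    print(f"{obligation:40s} -> {nist_home}")
```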

Key difference to flag

Conformity assessment has no NIST equivalent

The EU AI Act's conformity assessment — a formal process of documenting and proving compliance before placing a high-risk system on the market — has no direct equivalent in the NIST RMF. This is the most operationally demanding element of the Act, and where organisations typically need legal and technical specialist support beyond what a risk framework alone provides.

The vocabulary that signals you understand the Act — not just its headline provisions, but its architecture and intent.

Core legal terms
Provider
Develops or places AI on the market
Any natural or legal person who develops an AI system or has one developed, and places it on the market or puts it into service under their own name or trademark. Carries the heaviest obligations under the Act — especially for high-risk systems.
Deployer
Uses AI in their operations
Any natural or legal person who uses an AI system under their own authority, except where the use is personal and non-professional. Has lighter obligations than providers, but still faces requirements including fundamental rights impact assessments and human oversight for high-risk systems.
Conformity Assessment
The compliance proof process
The formal process by which a provider demonstrates that a high-risk AI system meets the Act's requirements before placing it on the market. Results in technical documentation and, where required, third-party certification. The most resource-intensive compliance obligation in the Act.
CE Marking
The conformity symbol
The marking that indicates a product complies with EU requirements. For high-risk AI systems embedded in regulated products, the CE mark signals the AI component also meets the Act's requirements. Familiar from other EU product regulation (medical devices, toys, machinery).
Annex III
The high-risk use case list
The list in the Act that defines which AI applications are automatically classified as high-risk. Covers eight domains: biometrics, critical infrastructure, education, employment (including CV screening and performance evaluation), essential private and public services, law enforcement, migration and border control, and the administration of justice and democratic processes.
GPAI Model
General Purpose AI
An AI model trained on large amounts of data using self-supervision at scale, that displays significant generality and can competently perform a wide range of tasks. GPT-4, Claude, Gemini are examples. Has its own obligation tier separate from the four risk categories, in force from August 2025.
Systemic Risk (GPAI)
Threshold for enhanced GPAI obligations
GPAI models are considered to pose systemic risk if trained using more than 10²⁵ floating point operations (FLOPs). These models face the highest obligations: model evaluations, adversarial testing ("red-teaming"), incident reporting to the European Commission, and cybersecurity requirements.
Fundamental Rights Impact Assessment
Required for deployers of high-risk AI
A structured assessment that certain deployers of high-risk AI (public bodies, private operators of public services, and deployers of credit and insurance scoring systems) must conduct before deployment. It evaluates the potential impact on fundamental rights, including privacy, non-discrimination, dignity, and access to justice, and must be documented and made available to authorities on request.
Technical Documentation
The compliance paper trail
Detailed documentation that providers of high-risk AI must maintain, covering: system description and intended purpose, design and development process, training data and methodology, testing and validation results, performance metrics, risk management measures, and instructions for deployers. Must be kept up to date and provided to authorities on request.
Brussels Effect
Why this matters outside Europe
The tendency of EU regulation to become a de facto global standard because multinationals find it more efficient to comply with the strictest regime globally rather than maintaining different practices per market. The EU AI Act is expected to drive global AI governance norms in the same way GDPR shaped global data protection practices.

The questions enterprise clients are asking right now — and answers that demonstrate you understand both the law and its business implications.

When they ask: "Does the EU AI Act apply to us? We're not a European company."
Almost certainly yes, if any of your AI systems are used by or affect people in the EU — regardless of where you're headquartered. This is the Brussels Effect: the EU doesn't regulate companies, it regulates markets. If your system touches the EU market, you're in scope. We've seen this pattern before with GDPR — organisations assumed it didn't apply to them, then scrambled when it did. The time to determine your exposure is now, not after enforcement starts.
When they ask: "We use AI for HR — is that high risk?"
If you're using AI to screen CVs, rank candidates, evaluate performance, or make promotion decisions — yes, that's Annex III high-risk. Employment and workforce management is one of the eight domains explicitly listed. That means conformity assessments, technical documentation, human oversight obligations, and registration on the EU public database before the August 2026 deadline. The good news is that with the right process, these are achievable — but 12–18 months is a realistic implementation timeline, so you need to start now.
When they ask: "We use ChatGPT / Copilot / Claude — does that make us subject to the Act?"
As a deployer using a third-party GPAI model, your obligations depend on how you're using it. Using it as a productivity tool with no automated decision-making — probably limited or minimal risk obligations (mainly transparency). Embedding it into a system that makes decisions affecting people's rights or safety — you need to assess whether that pushes you into high-risk territory. The provider (OpenAI, Microsoft, Anthropic) handles GPAI-level obligations; your obligations as deployer depend on your specific use case.
When they ask: "How does this relate to GDPR?"
They're complementary, not interchangeable. GDPR governs personal data — how you collect, process, and store it. The EU AI Act governs AI systems — how they're built, deployed, and overseen. They interact wherever AI processes personal data (which is often), but they're separate obligations with separate enforcement. Many of the GDPR compliance muscles your organisation has built — data mapping, impact assessments, documentation — are directly applicable to EU AI Act compliance. Think of the AI Act as adding a new dimension to a compliance programme that already exists.
When they ask: "We already follow NIST RMF — are we compliant?"
NIST RMF gives you the foundations — risk management processes, governance structures, documentation culture — that make AI Act compliance significantly more achievable. But it doesn't get you there by itself. The Act adds specific legal obligations that the RMF doesn't cover: conformity assessments, CE marking, the EU public database registration, mandatory incident reporting, and the Fundamental Rights Impact Assessment. Think of NIST as the practice that prepares you for the exam — but you still have to sit the exam.
When they ask: "What should we be doing right now?"
Three things immediately. First, audit your AI portfolio — map every system to a risk tier. Don't assume anything is minimal risk without checking against Annex III. Second, prioritise any system that touches employment, credit, essential services, or law enforcement — those are your Annex III systems and August 2026 is your hard deadline. Third, if you're using or deploying GPAI models, check whether your use cases create high-risk obligations that your provider doesn't cover. The organisations that move now will spend 12 months building compliance properly. Those that wait will be scrambling in 2026.

The one-liner that lands

How to frame it simply

The EU AI Act is to AI what the CE mark is to products — a baseline of safety and accountability that earns the right to operate in the world's largest single market. The question isn't whether to comply. It's whether you have enough time to do it properly.

Read each question and try to answer it yourself before reading the model answer. Scenario questions mirror real client conversations.

What are the four risk tiers in the EU AI Act — and what obligation does each carry?
Unacceptable risk — banned outright. Eight practices prohibited from February 2025, including social scoring and most real-time biometric identification.

High risk — strictly regulated. Must pass conformity assessment, maintain technical documentation, register on EU public database, enable human oversight.

Limited risk — transparency obligations only. Chatbots must disclose they're AI. Deepfakes must be labelled.

Minimal risk — no mandatory obligations. Spam filters, video game AI, most productivity tools.
What is "Annex III" — and why does it matter for enterprise clients?
Annex III is the list of use cases automatically classified as high-risk. Eight domains: biometrics, critical infrastructure, education, employment, essential private/public services, law enforcement, migration, and the administration of justice and democratic processes. Employment is the one that catches most enterprise clients — any AI used for CV screening, performance evaluation, or promotion decisions is Annex III high-risk. This means conformity assessment, technical documentation, human oversight, and registration on the EU public database — all before the August 2026 deadline.
What is the difference between a "provider" and a "deployer" under the Act — and why does the distinction matter?
Provider — develops or places an AI system on the market under their own name. Carries the heaviest obligations. Must conduct conformity assessment, maintain technical documentation, register the system, and appoint an EU representative if based outside the EU.

Deployer — uses an AI system in their own operations. Lighter obligations but still real: must conduct a Fundamental Rights Impact Assessment for high-risk systems, assign human oversight, and keep logs.

The critical nuance: a deployer who puts their own brand on a system or makes substantial modifications becomes a provider — with all the associated obligations. Many enterprise clients don't realise this until it's pointed out.
When did the ban on prohibited AI practices take effect — and what are two examples of banned uses?
The ban took effect on 2 February 2025. Examples of banned practices: social scoring by public or private actors (classifying people based on behaviour or socioeconomic status); subliminal or manipulative techniques that distort behaviour without conscious awareness; real-time remote biometric identification in public spaces (with narrow law enforcement exceptions); building facial recognition databases through untargeted scraping; inferring emotions in workplace or educational settings.
A US-headquartered company says: "We don't operate in Europe, so the EU AI Act doesn't apply to us." How do you respond?
They use an AI hiring tool deployed by their European subsidiary. Their customer-facing recommendation engine is used by EU consumers.
The Act applies to any organisation whose AI systems are placed on the EU market or used in the EU — regardless of where the company is headquartered. This is the Brussels Effect: the EU regulates markets, not companies. If their hiring tool screens European applicants, if their recommendation engine affects EU consumers, if their credit scoring model is used by a European subsidiary — they are in scope. The question to ask them is not "do we operate in Europe?" but "does any output from our AI systems affect people in the EU?"
A client says their HR team uses ChatGPT to shortlist CVs. What are their EU AI Act obligations?
Using a GPAI model (ChatGPT) in an employment context that influences hiring decisions.
This is the scenario that surprises clients most. As a deployer using a GPAI model for a high-risk use case (employment, Annex III), the client has real obligations: they must conduct a Fundamental Rights Impact Assessment, assign human oversight, maintain logs for six months, and inform employees that AI is being used to influence decisions about them. OpenAI handles GPAI-level obligations as the provider — but the deployer's obligations around the specific high-risk use case are entirely the client's responsibility. This is not a vendor problem to outsource.
A client asks: "We already comply with GDPR — does that cover our EU AI Act obligations?" What do you say?
GDPR compliance is real but doesn't address AI-specific obligations.
GDPR governs personal data — how it's collected, processed, and stored. The EU AI Act governs AI systems — how they're built, deployed, and overseen. They're separate obligations that overlap where AI processes personal data (which is often). Many GDPR compliance muscles — data mapping, impact assessments, documentation — are directly applicable to AI Act compliance. But GDPR doesn't require conformity assessments, technical documentation of AI systems, human oversight obligations for high-risk AI, or registration on the EU public database. Think of GDPR as a necessary but insufficient foundation for EU AI Act compliance.
