The EU AI Act is the world's first comprehensive legal framework for AI — binding law, not a voluntary framework. If your AI system affects people in the EU, it applies to you, regardless of where in the world you're based.
Regulation (EU) 2024/1689 entered into force on 1 August 2024. Unlike NIST RMF — which is voluntary — this is law with teeth. Fines for the most serious violations reach €35 million or 7% of global annual turnover, whichever is higher. It takes a risk-based approach: the more an AI system can harm people, the stricter the rules it faces.
This is the detail that surprises most clients. The Act applies to any organisation — anywhere in the world — whose AI systems are placed on the EU market or used in the EU. If you're an Australian company whose HR screening tool is used by a European subsidiary, the Act applies. This is the EU's "Brussels Effect" in action: one regulation that shapes global practice.
The Act doesn't regulate specific technologies — it regulates the risk a system poses to people. The same underlying model could be minimal risk in one use case and high risk in another. A recommendation engine for films: minimal risk. The same engine used to rank job applicants: high risk. Context is everything.
Providers (those who develop or place AI on the market) carry the heaviest burden. Deployers (those who use AI in their operations) have lighter but real obligations. Critically: if a deployer puts their own brand on a system or makes substantial changes to it, they can become legally responsible as a provider. The obligations follow the risk, not just the job title.
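In practice, the role test collapses to a few predicates. A minimal sketch, assuming hypothetical flag names; the actual legal test lives in Article 25 and turns on facts, not booleans:

```python
# Hypothetical sketch of the Act's role logic (Article 25): a deployer who
# rebrands a high-risk system or substantially modifies it inherits the
# provider's obligations. Flag names are illustrative, not legal terms of art.
def effective_role(develops_system: bool,
                   puts_own_brand_on_system: bool,
                   makes_substantial_modification: bool) -> str:
    if develops_system:
        return "provider"
    if puts_own_brand_on_system or makes_substantial_modification:
        return "provider"  # deployer treated as provider under Article 25
    return "deployer"
```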
Four tiers: unacceptable risk (prohibited outright), high risk, limited risk (transparency obligations), and minimal risk. The tier your system sits in determines everything: your obligations, your timeline, and your exposure if something goes wrong. Getting the classification right is the first and most consequential decision.
A large language model used as a creative writing assistant is minimal risk. The same model deployed to make or influence decisions about loan applications is high risk. The AI Act classifies the use case and deployment context — not the underlying technology. This is the single most important concept when advising clients on their AI portfolio.
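To make the portfolio point concrete, here is a minimal triage sketch. The registry, names, and default are illustrative assumptions; actual classification turns on Article 5 and Annex III, read with counsel.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"   # Article 5 practices, e.g. social scoring
    HIGH = "high"                 # Annex III use cases, e.g. recruitment, credit
    LIMITED = "limited"           # transparency duties, e.g. chatbots disclose
    MINIMAL = "minimal"           # everything else, e.g. film recommendations

# Illustrative registry: the use case, not the model, drives the tier.
# Entries paraphrase Annex III categories; real triage needs legal review.
USE_CASE_TIERS = {
    "creative_writing_assistant": RiskTier.MINIMAL,
    "film_recommendation": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "job_applicant_ranking": RiskTier.HIGH,       # Annex III: employment
    "loan_application_scoring": RiskTier.HIGH,    # Annex III: creditworthiness
    "public_social_scoring": RiskTier.UNACCEPTABLE,
}

def classify(use_case: str) -> RiskTier:
    """Unknown use cases default to HIGH pending review, not MINIMAL."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Note the default: treating an unregistered use case as minimal risk is the expensive mistake; the conservative triage assumes high risk until someone signs off otherwise.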
General-purpose AI (GPAI) models sit outside the four-tier structure; they carry their own obligations from August 2025. All GPAI providers must maintain technical documentation, comply with EU copyright law, and publish summaries of their training data. GPAI models deemed to pose systemic risk (presumed when training compute exceeds 10²⁵ FLOPs) face additional requirements: model evaluations, adversarial testing, serious-incident reporting to the European Commission, and cybersecurity protections.
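A small sketch of how those two layers of GPAI duties stack, using the Act's 10²⁵ FLOP presumption; the function and duty labels are shorthand, not the Act's wording:

```python
# Hypothetical helper stacking the two layers of GPAI duties. The 1e25 FLOP
# systemic-risk presumption is from the Act (Article 51); duty labels are
# shorthand for the obligations listed above.
SYSTEMIC_RISK_FLOPS = 1e25

def gpai_duties(training_flops: float) -> list[str]:
    duties = [
        "technical documentation",
        "copyright-compliance policy",
        "training-data summary",
    ]
    if training_flops >= SYSTEMIC_RISK_FLOPS:
        duties += [
            "model evaluations",
            "adversarial testing",
            "serious-incident reporting to the Commission",
            "cybersecurity protections",
        ]
    return duties
```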
The obligations on high-risk AI are substantial, and the timeline is tighter than most organisations realise: the bulk of the high-risk regime applies from 2 August 2026, while conformity assessments, documentation, and risk management systems take 12–18 months to implement properly.
- Prohibited practice violations: up to €35 million or 7% of global annual turnover (in each case, whichever is higher).
- High-risk non-compliance: up to €15 million or 3% of global annual turnover.
- Supplying incorrect or misleading information to authorities: up to €7.5 million or 1% of global annual turnover.

For large multinationals, these are not theoretical numbers.
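Because each cap is the fixed amount or the percentage of worldwide turnover, whichever is higher, the headline euro figures understate the exposure of large firms. A quick sketch (tier keys are shorthand, not the Act's legal terminology):

```python
# Sketch of how the Act's penalty caps combine: each ceiling is the fixed
# amount or the percentage of worldwide annual turnover, whichever is higher.
PENALTY_CAPS = {
    "prohibited_practice": (35_000_000, 7),   # EUR cap, percent of turnover
    "high_risk_noncompliance": (15_000_000, 3),
    "false_information": (7_500_000, 1),
}

def max_fine(violation: str, global_turnover_eur: int) -> int:
    fixed_cap, pct = PENALTY_CAPS[violation]
    return max(fixed_cap, global_turnover_eur * pct // 100)

# A multinational with EUR 2bn turnover: the ceiling is EUR 140m, not EUR 35m.
assert max_fine("prohibited_practice", 2_000_000_000) == 140_000_000
```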
Most enterprise clients will ask how the EU AI Act and NIST RMF relate to each other. The short answer: they're complementary, not competing. Understanding both — and how they map — is a genuinely differentiating capability.
| Dimension | EU AI Act | NIST AI RMF |
|---|---|---|
| Nature | Binding law; mandatory for in-scope organisations. | Voluntary framework; organisations choose to adopt it. |
| Origin | European Union regulation with extraterritorial reach. | US government framework with no geographic restriction. |
| Approach | Risk-tier classification; prescriptive obligations per tier. | Function-based (GOVERN, MAP, MEASURE, MANAGE); flexible and adaptable. |
| Focus | Compliance, fundamental rights, market access. | Trustworthiness, risk culture, continuous improvement. |
| Enforcement | National market surveillance authorities; fines up to 7% of global turnover. | No enforcement mechanism; accountability is internal. |
| GPAI / foundation models | Specific obligations for GPAI models; systemic risk threshold defined. | Framework principles apply, but no GPAI-specific provisions. |
| Human oversight | Mandatory for high-risk AI (Article 14). | Core principle throughout all four functions. |
Think of it this way: the NIST RMF teaches you how to manage AI risk responsibly — it builds the muscle. The EU AI Act tells you what you're legally required to demonstrate. Organisations that have implemented the NIST RMF properly will find EU AI Act compliance significantly more achievable — because they've already built the documentation, governance structures, and risk processes the Act demands. The RMF is the preparation; the Act is the exam.
The EU AI Act's high-risk obligations map closely to NIST RMF concepts: technical documentation (GOVERN), risk management systems (MAP + MEASURE), human oversight (MANAGE), accuracy and robustness requirements (Valid & Reliable), transparency obligations (Accountable & Transparent), and the Fundamental Rights Impact Assessment (MAP function).
The Act's prohibition on manipulative AI maps to the Safe and Fair (with harmful bias managed) trustworthiness characteristics. Its data governance requirement maps to NIST's emphasis on data provenance and computational bias management.
The EU AI Act's conformity assessment — a formal process of documenting and proving compliance before placing a high-risk system on the market — has no direct equivalent in the NIST RMF. This is the most operationally demanding element of the Act, and where organisations typically need legal and technical specialist support beyond what a risk framework alone provides.
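One way teams operationalise this crosswalk is as a simple lookup in their governance tooling. A sketch restating the mappings above, including the conformity-assessment gap; the keys paraphrase the Act's obligations, and this is illustrative, not an official mapping:

```python
# The crosswalk above as a machine-readable map. Keys paraphrase EU AI Act
# obligations; values are the NIST RMF functions or trustworthiness
# characteristics named in this guide.
ACT_TO_NIST_RMF = {
    "technical documentation": ["GOVERN"],
    "risk management system": ["MAP", "MEASURE"],
    "human oversight (Article 14)": ["MANAGE"],
    "accuracy and robustness": ["Valid & Reliable"],
    "transparency obligations": ["Accountable & Transparent"],
    "fundamental rights impact assessment": ["MAP"],
    "manipulative AI prohibition": ["Safe", "Fair (harmful bias managed)"],
    "data governance": ["data provenance", "computational bias management"],
    "conformity assessment": [],  # the gap: no direct RMF equivalent
}

def rmf_head_start(obligation: str) -> list[str]:
    """Where does prior NIST RMF work give a compliance head start?"""
    return ACT_TO_NIST_RMF.get(obligation, [])
```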
The vocabulary that signals you understand the Act — not just its headline provisions, but its architecture and intent.
The questions enterprise clients are asking right now — and answers that demonstrate you understand both the law and its business implications.
The EU AI Act is to AI what the CE mark is to products — a baseline of safety and accountability that earns the right to operate in the world's largest single market. The question isn't whether to comply. It's whether you have enough time to do it properly.
Read each question, try to answer mentally, then tap to reveal. Scenario questions mirror real client conversations.
The authoritative sources behind this guide. The EU AI Act is binding law — these are the primary legal texts, not summaries.