An AI Impact Assessment is how you prove — to regulators, clients, and your own board — that you've thought seriously about what your AI system could do to people before you deploy it. It's the bridge between governance principles and operational reality.
An AI Impact Assessment (AIIA) is a systematic evaluation of an AI system's potential effects — positive and negative — on individuals, groups, organisations, and society. It identifies who is affected, how they could be harmed, how likely and severe those harms are, and what mitigations are in place. Think of it as a due diligence exercise that creates a documented, auditable record of responsible decision-making.
The EU AI Act requires certain deployers of high-risk AI systems to carry out a Fundamental Rights Impact Assessment. ISO/IEC 42005:2025 provides an internationally recognised methodology, and ISO/IEC 42001 (the AI management system standard) requires organisations to assess AI system impacts as part of their management system. Beyond compliance, a systematic impact assessment surfaces problems while they can still be designed out, reducing both the likelihood and the cost of AI-related incidents after deployment.
Clients increasingly ask for evidence of impact assessment as part of procurement. Having a documented, rigorous process is fast becoming a competitive differentiator.
Risk assessment focuses on the likelihood and severity of potential negative events — it sets strategies for mitigation. Impact assessment focuses on the foreseeable effects on people and society — it identifies who is affected and how. You need both. ISO/IEC 42005 specifically clarifies this distinction: impact assessments examine societal and individual consequences; risk assessments examine organisational exposure. In practice, they're conducted together but structured separately in documentation.
The most common mistake is treating an impact assessment as a pre-launch checklist. ISO/IEC 42005 is explicit: assessments should be integrated throughout the AI lifecycle — from design and development through deployment and post-market monitoring. An assessment done at design stage can influence architecture decisions. An assessment done the week before launch is mostly documentation of decisions already made, with little ability to change outcomes.
A rigorous AI impact assessment follows a structured sequence. Each step builds on the last — skip one and the whole assessment is weakened. The process is iterative, not linear: findings at later steps will send you back to earlier ones.
Start by precisely documenting what the AI system is, what it does, its intended purpose, and where it sits in the deployment lifecycle. Include: the type of AI (classification, generation, recommendation, prediction), the input data it uses, the outputs it produces, and who receives those outputs. Also document what the system explicitly does not do — boundary conditions matter as much as intended functions. This scoping document becomes the foundation everything else is built on.
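That scoping document lends itself to a simple structured record. Below is a minimal sketch in Python, assuming a hypothetical `SystemScope` record whose field names and example values are illustrative rather than taken from any standard:

```python
from dataclasses import dataclass, field

@dataclass
class SystemScope:
    """Illustrative scoping record for an AI impact assessment (field names are assumptions)."""
    name: str
    ai_type: str                     # e.g. "classification", "generation", "recommendation", "prediction"
    intended_purpose: str
    lifecycle_stage: str             # e.g. "design", "development", "deployment", "post-market"
    input_data: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    output_recipients: list[str] = field(default_factory=list)
    explicitly_out_of_scope: list[str] = field(default_factory=list)  # boundary conditions

scope = SystemScope(
    name="CV screening assistant",
    ai_type="classification",
    intended_purpose="Rank applications for recruiter review",
    lifecycle_stage="design",
    input_data=["CV text", "role description"],
    outputs=["shortlist ranking"],
    output_recipients=["hiring managers"],
    explicitly_out_of_scope=["automated rejection without human review"],
)
```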
Go beyond the obvious users. An AI hiring tool's stakeholders aren't just the hiring managers using it — they include job applicants (especially those from historically underrepresented groups), employees whose data trains it, and communities where hiring concentrates economic opportunity. Identify groups who benefit, groups who could be harmed, and groups who might be affected without knowing the system exists. Pay particular attention to vulnerable groups: those with less power to challenge or appeal decisions made about them.
This is the analytical core of the assessment. For each stakeholder group, evaluate potential harms across multiple dimensions: physical, psychological, financial, reputational, and societal. Critically — look beyond intended uses. Anticipate misuse, over-reliance, and unintended applications. Consider failure modes: what happens when the system is wrong? What happens when it's right but the context has changed? Consider second-order effects: how might this system change behaviour, power dynamics, or social norms over time?
Not all harms are equal. A useful severity framework evaluates four variables for each identified harm. Scale — how many people could be affected? Scope — what proportion of the affected group is impacted? Likelihood — how probable is this harm given the system's design and deployment context? Reversibility — can affected individuals recover, or is the harm permanent? The combination of these variables produces a severity score that drives prioritisation. The HUDERIA methodology (Council of Europe) uses exactly this framework.
For each harm that exceeds your risk tolerance threshold, define specific mitigations. These fall into four types: technical controls (bias detection, model constraints, output filtering), procedural safeguards (human review requirements, escalation protocols, override mechanisms), governance mechanisms (accountability assignments, audit schedules, incident reporting), and transparency measures (disclosure to affected individuals, documentation for regulators). Each mitigation must have a named owner, a timeline, and a measurable outcome. Mitigations without owners don't get implemented.
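One way to enforce that last rule is to keep mitigations in a register that rejects entries without an owner or a measurable outcome. A minimal sketch, assuming a hypothetical in-memory `Mitigation` record (field names and example values are illustrative):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Mitigation:
    harm_id: str
    description: str
    control_type: str    # "technical", "procedural", "governance", or "transparency"
    owner: str           # named individual or role accountable for delivery
    due_date: date
    success_measure: str

    def __post_init__(self):
        # Mitigations without owners don't get implemented: reject incomplete entries up front.
        if not self.owner.strip():
            raise ValueError(f"Mitigation for {self.harm_id} has no owner")
        if not self.success_measure.strip():
            raise ValueError(f"Mitigation for {self.harm_id} has no measurable outcome")

register = [
    Mitigation(
        harm_id="H-03",
        description="Quarterly bias audit of shortlist rankings by protected characteristic",
        control_type="procedural",
        owner="Head of Talent Acquisition",
        due_date=date(2026, 1, 31),
        success_measure="Selection-rate disparity within agreed threshold across groups",
    ),
]
```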
An impact assessment is not a one-time document — it's a living record. Establish monitoring mechanisms that track real-world performance against the harms identified. Define clear triggers for reassessment: when the system is substantially modified, when it's deployed in a new context, when incident rates change, or when the regulatory environment shifts. The EU AI Act requires conformity assessments to be updated whenever a system undergoes substantial modification. Build feedback channels so users and affected individuals can report concerns — this is often your earliest signal that something is going wrong.
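Those reassessment triggers can be written down as explicit conditions rather than left to judgement. A sketch under the assumption that incident rates and deployment changes are already tracked elsewhere; the function name, fields, and threshold are illustrative:

```python
def reassessment_needed(
    substantially_modified: bool,
    new_deployment_context: bool,
    incident_rate: float,
    baseline_incident_rate: float,
    regulatory_change: bool,
    rate_change_threshold: float = 1.5,
) -> list[str]:
    """Return the reassessment triggers that currently apply (illustrative logic)."""
    triggers = []
    if substantially_modified:
        # Substantial modification also triggers a fresh conformity assessment under the EU AI Act.
        triggers.append("substantial modification")
    if new_deployment_context:
        triggers.append("new deployment context")
    if baseline_incident_rate > 0 and incident_rate / baseline_incident_rate >= rate_change_threshold:
        triggers.append("incident rate shift")
    if regulatory_change:
        triggers.append("regulatory change")
    return triggers
```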
Every stage of an impact assessment, from the processes followed to the decisions made and the mitigations applied, must be documented to provide a clear and auditable record. This isn't bureaucracy: it's the mechanism by which accountability is established, trust is built with regulators, and institutional knowledge is preserved when team members change. The EU AI Act requires technical documentation for high-risk systems to be available to authorities on request, and ISO/IEC 42005 gives guidance on what impact assessment documentation should contain.
Identifying harms is the hardest and most important part of an impact assessment. It requires moving beyond obvious technical failures to understand the full human and societal consequences of an AI system's outputs.
Physical harm: Injury, illness, or death resulting from AI-driven decisions or actions. Highest severity, and always the first priority.
Psychological harm: Distress, discrimination, loss of dignity, or erosion of autonomy caused by how an AI system treats people.
Financial harm: Economic loss, denial of opportunity, or unfair treatment in financial decisions caused by AI outputs.
Reputational harm: Damage to individuals' standing, relationships, or professional prospects caused by incorrect or unfair AI outputs.
Societal harm: Systemic effects on communities, democratic processes, or social structures that extend beyond individual cases.
Environmental harm: Resource consumption, emissions, and ecological impacts from AI systems, often overlooked in impact assessments.
| Variable | Low | Medium | High |
|---|---|---|---|
| Scale (how many affected?) | Individual or small group | Significant subpopulation | Large population or systemic |
| Scope (what proportion?) | Small fraction of group affected | Meaningful minority affected | Majority or entire group affected |
| Likelihood (how probable?) | Unlikely given design and controls | Possible under foreseeable conditions | Probable or already occurring |
| Reversibility (can harm be undone?) | Fully reversible with intervention | Partially reversible over time | Permanent or very difficult to remedy |
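The table's ratings can be combined into a rough numeric score for prioritisation. The sketch below maps each level to 1-3 and multiplies the four variables; this particular formula is an illustrative assumption, not a scoring rule prescribed by HUDERIA or ISO/IEC 42005:

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def severity_score(scale: str, scope: str, likelihood: str, reversibility: str) -> int:
    """Combine the four severity variables into a single prioritisation score (illustrative formula)."""
    # Each variable is rated low/medium/high per the table above.
    return (
        LEVELS[scale]
        * LEVELS[scope]
        * LEVELS[likelihood]
        * LEVELS[reversibility]   # "high" here means hardest to undo
    )

# A systemic, majority-affected, probable, permanent harm scores 81 (the maximum);
# an unlikely, fully reversible harm to a small group scores 1.
print(severity_score("high", "high", "high", "high"))  # 81
print(severity_score("low", "low", "low", "low"))      # 1
```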
One of the most consistent findings in AI impact assessments is that harm frequently comes not from the intended use but from unintended applications, misuse, or over-reliance. A system designed to assist doctors may be used to replace clinical judgement entirely. A tool built for fraud detection may be weaponised against whistleblowers. Assessments that only consider the intended use case are systematically incomplete. ISO/IEC 42005 explicitly requires organisations to anticipate foreseeable misuse scenarios alongside intended use.
Stakeholder mapping is where many impact assessments fall short — focusing only on the direct users of a system while missing the people most likely to be harmed by it. The people operating an AI system are rarely the people most affected by its outputs.
Operators: The people and organisations who deploy and run the AI system. They have the most visibility into its operation and the most direct ability to intervene, and often carry legal responsibility as "deployers" under the EU AI Act.
Direct users: People who interact with the system's interface, such as the hiring manager using the CV screening tool or the doctor using the diagnostic assistant. Often conflated with subjects, but their relationship with the system is quite different.
Subjects: People about whom the AI system makes decisions or inferences. They often don't know the system exists, are frequently the group most at risk of harm, and are the least empowered to challenge decisions made about them. They must be explicitly identified and prioritised.
Indirect affected parties: Communities, families, competitors, or social groups affected by the system's outputs without direct involvement. The families of a predictive policing system's subjects are indirect affected parties. This group is often completely absent from impact assessments.
Vulnerable groups: Groups with less power, resources, or legal protection to challenge AI decisions, including people with disabilities, low-income communities, ethnic minorities, children, and elderly people. They must receive heightened scrutiny in harm analysis; ISO/IEC 42005 specifically flags these groups.
Regulatory bodies: Regulators, auditors, and oversight authorities who need to verify compliance. Their perspective on what evidence of due diligence they require should inform how the assessment is structured and documented from the start.
Identifying stakeholders on paper is the minimum. ISO/IEC 42005 and the HUDERIA methodology both emphasise that stakeholder engagement — actually consulting the people affected — improves the quality of risk analysis, builds transparency and trust, and often surfaces harms that internal teams have missed. For high-risk systems, genuine consultation with affected communities is increasingly expected by regulators and sophisticated procurement teams. "Checkbox consultation" — brief surveys that don't influence design — is becoming legally and reputationally inadequate.
Worked example: stakeholder map for an AI CV-screening tool used in recruitment.
Operators: HR department, platform vendor
Direct users: Hiring managers, HR coordinators
Subjects: Job applicants — especially those from groups historically underrepresented in the industry
Indirect affected parties: Communities where hiring concentrates economic opportunity; current employees whose performance data trains the model; rejected candidates' families
Vulnerable groups: Applicants with disabilities; career changers (non-linear CVs); candidates from non-traditional educational backgrounds
Regulatory bodies: Employment regulators; data protection authorities; EU market surveillance authority (under the AI Act, recruitment AI is Annex III high-risk)
Prioritise stakeholder engagement based on three factors: power (can they influence the system?), legitimacy (do they have a recognised claim on the system's outcomes?), and urgency (are they at immediate risk of harm?). Groups with low power, high legitimacy, and high urgency — typically subjects from vulnerable groups — warrant the most intensive engagement and the most rigorous harm analysis. Groups with high power and low urgency may require less direct engagement but need to be kept informed.
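That rule of thumb can be made explicit. A minimal sketch, assuming each stakeholder group has simply been rated high or low on the three factors; the mapping below is an illustrative encoding of the paragraph above, not a standard method:

```python
def engagement_priority(power: bool, legitimacy: bool, urgency: bool) -> str:
    """Map power/legitimacy/urgency ratings to an engagement level (illustrative rule)."""
    if legitimacy and urgency:
        # Low-power groups in this bucket (e.g. subjects from vulnerable groups)
        # warrant the most intensive engagement and the most rigorous harm analysis.
        return "intensive engagement"
    if power and not urgency:
        return "keep informed"
    if legitimacy or urgency:
        return "consult"
    return "monitor"

print(engagement_priority(power=False, legitimacy=True, urgency=True))   # intensive engagement
print(engagement_priority(power=True, legitimacy=True, urgency=False))   # keep informed
```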
The vocabulary of impact assessment — the terms that signal you understand the methodology, not just the concept.
The conversations you'll have when clients ask about AI impact assessments — whether they're trying to understand what they need to do, or trying to understand why it's worth doing.
Frame it not as a cost but as an investment that pays out in three ways: it reduces the probability of a harm event; if a harm event occurs, it demonstrates due diligence that limits legal and reputational exposure; and it builds the stakeholder trust that enables faster scaling of AI across the organisation. The clients most resistant to impact assessments are usually those most likely to need them.
Impact assessment is a methodology — understanding the steps and why each matters is more important than memorising definitions.
The standards, methodologies, and frameworks that define how AI impact assessments should be conducted and documented.