
AI Impact Assessment

ISO/IEC 42005:2025 · EU AI Act · NIST RMF aligned

An AI Impact Assessment is how you prove — to regulators, clients, and your own board — that you've thought seriously about what your AI system could do to people before you deploy it. It's the bridge between governance principles and operational reality.

What it is

A structured pre-deployment analysis

An AI Impact Assessment (AIIA) is a systematic evaluation of an AI system's potential effects — positive and negative — on individuals, groups, organisations, and society. It identifies who is affected, how they could be harmed, how likely and severe those harms are, and what mitigations are in place. Think of it as a due diligence exercise that creates a documented, auditable record of responsible decision-making.

Why it matters now

It's becoming mandatory — and expected

The EU AI Act requires a Fundamental Rights Impact Assessment from certain deployers of high-risk AI systems. ISO/IEC 42005:2025 provides the internationally recognised standard methodology. ISO/IEC 42001 (the AI management system standard) requires organisations to assess AI consequences as part of their management system. Beyond compliance, organisations that conduct systematic impact assessments tend to experience fewer AI-related incidents than those that don't.

Clients increasingly ask for evidence of impact assessment as part of procurement. Having a documented, rigorous process is fast becoming a competitive differentiator.

Risk assessment vs impact assessment

They're related but distinct — and the distinction matters

Risk assessment focuses on the likelihood and severity of potential negative events — it sets strategies for mitigation. Impact assessment focuses on the foreseeable effects on people and society — it identifies who is affected and how. You need both. ISO/IEC 42005 specifically clarifies this distinction: impact assessments examine societal and individual consequences; risk assessments examine organisational exposure. In practice, they're conducted together but structured separately in documentation.

The timing question

Start early — not just before deployment

The most common mistake is treating an impact assessment as a pre-launch checklist. ISO/IEC 42005 is explicit: assessments should be integrated throughout the AI lifecycle — from design and development through deployment and post-market monitoring. An assessment done at design stage can influence architecture decisions. An assessment done the week before launch is mostly documentation of decisions already made, with little ability to change outcomes.

A rigorous AI impact assessment follows a structured sequence. Each step builds on the last — skip one and the whole assessment is weakened. The process is iterative, not linear: findings at later steps will send you back to earlier ones.

1
Define scope

Describe the system and its purpose

Start by precisely documenting what the AI system is, what it does, its intended purpose, and where it sits in the deployment lifecycle. Include: the type of AI (classification, generation, recommendation, prediction), the input data it uses, the outputs it produces, and who receives those outputs. Also document what the system explicitly does not do — boundary conditions matter as much as intended functions. This scoping document becomes the foundation everything else is built on.

Key questions
  • What decision or action does this system enable or automate?
  • What are its intended use cases — and what uses are out of scope?
  • What data does it use as input, and where does that data come from?
  • At what stage of development or deployment is it currently?
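One practical way to capture this scoping step is as a structured record rather than free-form prose, so that later steps can reference it directly. The sketch below is a minimal, hypothetical Python template; the class and field names are assumptions, not terms taken from ISO/IEC 42005.

```python
from dataclasses import dataclass

@dataclass
class SystemScope:
    """Minimal scoping record for an AI impact assessment (illustrative fields)."""
    name: str
    ai_type: str                  # e.g. "classification", "generation", "recommendation", "prediction"
    intended_purpose: str
    out_of_scope_uses: list[str]  # boundary conditions: what the system must not be used for
    input_data_sources: list[str]
    outputs: str
    output_recipients: list[str]
    lifecycle_stage: str          # e.g. "design", "development", "deployed", "post-market"

scope = SystemScope(
    name="CV screening assistant",
    ai_type="classification",
    intended_purpose="Rank applications for recruiter review",
    out_of_scope_uses=["automated rejection without human review"],
    input_data_sources=["applicant CVs", "historical hiring outcomes"],
    outputs="Suitability score per application",
    output_recipients=["hiring managers", "HR coordinators"],
    lifecycle_stage="development",
)
```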
2
Map stakeholders

Identify everyone affected — directly and indirectly

Go beyond the obvious users. An AI hiring tool's stakeholders aren't just the hiring managers using it — they include job applicants (especially those from historically underrepresented groups), employees whose data trains it, and communities where hiring concentrates economic opportunity. Identify groups who benefit, groups who could be harmed, and groups who might be affected without knowing the system exists. Pay particular attention to vulnerable groups: those with less power to challenge or appeal decisions made about them.

Stakeholder categories
  • Direct users (operate the system)
  • Subjects (decisions made about them)
  • Indirect affected parties (affected by outcomes)
  • Vulnerable or marginalised groups
  • Regulatory bodies and oversight authorities
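The stakeholder map can be kept in the same structured form, so that harms and mitigations identified later can point back to specific groups. A minimal sketch, with assumed names mirroring the categories above:

```python
from dataclasses import dataclass
from enum import Enum

class StakeholderCategory(Enum):
    DIRECT_USER = "operates the system"
    SUBJECT = "decisions made about them"
    INDIRECT = "affected by outcomes without direct involvement"
    VULNERABLE = "vulnerable or marginalised group"
    REGULATOR = "regulatory or oversight body"

@dataclass
class Stakeholder:
    group: str
    category: StakeholderCategory
    aware_of_system: bool  # subjects often do not know the system exists
    can_appeal: bool       # ability to challenge decisions made about them

stakeholders = [
    Stakeholder("Hiring managers", StakeholderCategory.DIRECT_USER, aware_of_system=True, can_appeal=True),
    Stakeholder("Job applicants", StakeholderCategory.SUBJECT, aware_of_system=False, can_appeal=False),
    Stakeholder("Applicants with disabilities", StakeholderCategory.VULNERABLE, aware_of_system=False, can_appeal=False),
]
```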
3
Identify harms

Systematically surface potential negative impacts

This is the analytical core of the assessment. For each stakeholder group, evaluate potential harms across multiple dimensions: physical, psychological, financial, reputational, and societal. Critically — look beyond intended uses. Anticipate misuse, over-reliance, and unintended applications. Consider failure modes: what happens when the system is wrong? What happens when it's right but the context has changed? Consider second-order effects: how might this system change behaviour, power dynamics, or social norms over time?

Prompt questions
  • Who could be worse off because of this system's outputs?
  • What happens to people when the system is wrong?
  • Could this system be misused in ways we haven't designed for?
  • Does this system create or amplify power imbalances?
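Recording each candidate harm against a stakeholder group and a harm dimension keeps this step systematic and feeds directly into severity scoring. A minimal, hypothetical record might look like this:

```python
from dataclasses import dataclass
from enum import Enum

class HarmDimension(Enum):
    PHYSICAL = "physical"
    PSYCHOLOGICAL = "psychological"
    FINANCIAL = "financial"
    REPUTATIONAL = "reputational"
    SOCIETAL = "societal"
    ENVIRONMENTAL = "environmental"

@dataclass
class Harm:
    stakeholder_group: str
    dimension: HarmDimension
    description: str
    arises_from_misuse: bool  # flag harms that stem from foreseeable misuse rather than intended use

harms = [
    Harm("Job applicants", HarmDimension.FINANCIAL,
         "Qualified candidates systematically screened out", arises_from_misuse=False),
    Harm("Job applicants", HarmDimension.PSYCHOLOGICAL,
         "Discriminatory rejections eroding self-worth", arises_from_misuse=False),
    Harm("Current employees", HarmDimension.SOCIETAL,
         "Screening tool repurposed for performance surveillance", arises_from_misuse=True),
]
```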
4
Assess severity

Score each harm by scale, scope, likelihood, and reversibility

Not all harms are equal. A useful severity framework evaluates four variables for each identified harm. Scale — how many people could be affected? Scope — what proportion of the affected group is impacted? Likelihood — how probable is this harm given the system's design and deployment context? Reversibility — can affected individuals recover, or is the harm permanent? The combination of these variables produces a severity score that drives prioritisation. The HUDERIA methodology (Council of Europe) uses exactly this framework.
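One way to operationalise this is to rate each variable on a simple 1 to 3 ordinal scale (low, medium, high) and combine them into a priority score. The sketch below does exactly that; the scale, the plain sum, and the band thresholds are illustrative assumptions, since neither HUDERIA nor ISO/IEC 42005 prescribes a numeric formula.

```python
from dataclasses import dataclass

@dataclass
class SeverityRating:
    """Ordinal ratings: 1 = low, 2 = medium, 3 = high."""
    scale: int          # how many people could be affected
    scope: int          # what proportion of the affected group
    likelihood: int     # how probable given design and deployment context
    reversibility: int  # 3 = permanent or very difficult to remedy

    def score(self) -> int:
        return self.scale + self.scope + self.likelihood + self.reversibility

def priority_band(rating: SeverityRating) -> str:
    """Map the composite score to a coarse priority band (illustrative thresholds)."""
    if rating.reversibility == 3 or rating.score() >= 10:
        return "critical"  # irreversible harms are escalated regardless of the other variables
    if rating.score() >= 7:
        return "high"
    return "moderate"

biased_screening = SeverityRating(scale=3, scope=2, likelihood=3, reversibility=2)
print(priority_band(biased_screening))  # -> "critical" (score 10)
```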

5
Mitigate

Design and assign control measures for each material harm

For each harm that exceeds your risk tolerance threshold, define specific mitigations. These fall into four types: technical controls (bias detection, model constraints, output filtering), procedural safeguards (human review requirements, escalation protocols, override mechanisms), governance mechanisms (accountability assignments, audit schedules, incident reporting), and transparency measures (disclosure to affected individuals, documentation for regulators). Each mitigation must have a named owner, a timeline, and a measurable outcome. Mitigations without owners don't get implemented.
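A mitigation register can enforce the "named owner, timeline, measurable outcome" rule mechanically. This is an illustrative sketch rather than a prescribed format; it simply refuses to record a mitigation that has no named owner.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ControlType(Enum):
    TECHNICAL = "technical control"
    PROCEDURAL = "procedural safeguard"
    GOVERNANCE = "governance mechanism"
    TRANSPARENCY = "transparency measure"

@dataclass
class Mitigation:
    harm: str
    control_type: ControlType
    measure: str
    owner: str           # a named individual, not just a team
    due: date
    success_metric: str  # the measurable outcome

    def __post_init__(self):
        if not self.owner.strip():
            raise ValueError("Every mitigation needs a named owner")

register = [
    Mitigation(
        harm="Qualified candidates systematically screened out",
        control_type=ControlType.PROCEDURAL,
        measure="Human review of all rejections below the score threshold",
        owner="Head of Talent Acquisition",
        due=date(2026, 3, 31),
        success_metric="100% of threshold rejections reviewed and logged",
    ),
]
```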

6
Monitor & update

Build ongoing evaluation into the deployment lifecycle

An impact assessment is not a one-time document — it's a living record. Establish monitoring mechanisms that track real-world performance against the harms identified. Define clear triggers for reassessment: when the system is substantially modified, when it's deployed in a new context, when incident rates change, or when the regulatory environment shifts. The EU AI Act requires conformity assessments to be updated whenever a system undergoes substantial modification. Build feedback channels so users and affected individuals can report concerns — this is often your earliest signal that something is going wrong.
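Reassessment triggers are easier to enforce when they are written down as explicit checks rather than left to judgement. A minimal sketch, with assumed trigger names and an illustrative incident-rate threshold:

```python
from dataclasses import dataclass

@dataclass
class MonitoringSnapshot:
    substantially_modified: bool   # e.g. retrained on new data or a new model architecture
    new_deployment_context: bool   # e.g. new market, new user population
    incident_rate: float           # incidents per 1,000 decisions in the current period
    baseline_incident_rate: float  # rate recorded at the last assessment
    regulatory_change: bool        # relevant new legislation, standards, or guidance

def reassessment_triggers(snap: MonitoringSnapshot) -> list[str]:
    """Return the triggers that fire; any non-empty result means reassess."""
    fired = []
    if snap.substantially_modified:
        fired.append("substantial modification")
    if snap.new_deployment_context:
        fired.append("new deployment context")
    if snap.incident_rate > 1.5 * snap.baseline_incident_rate:  # illustrative threshold
        fired.append("incident rate increase")
    if snap.regulatory_change:
        fired.append("regulatory change")
    return fired

print(reassessment_triggers(MonitoringSnapshot(False, True, 0.8, 0.7, False)))
# -> ['new deployment context']
```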

The documentation imperative

If it isn't documented, it didn't happen

Every stage of an impact assessment — processes followed, decisions made, mitigations applied — must be documented to provide a clear and auditable record. This isn't bureaucracy: it's the mechanism by which accountability is established, trust is built with regulators, and institutional knowledge is preserved when team members change. The EU AI Act requires technical documentation to be available to authorities on request. ISO/IEC 42005 specifies what that documentation must contain.

Identifying harms is the hardest and most important part of an impact assessment. It requires moving beyond obvious technical failures to understand the full human and societal consequences of an AI system's outputs.

Categories of harm to assess
Physical harm

Injury, illness, or death resulting from AI-driven decisions or actions. Highest severity — always the first priority.

  • Medical diagnosis errors leading to wrong treatment
  • Autonomous vehicle decisions causing accidents
  • Safety system failures in industrial settings
  • Delayed emergency response due to AI triage errors
Psychological harm

Distress, discrimination, loss of dignity, or erosion of autonomy caused by how an AI system treats people.

  • Discriminatory hiring decisions affecting self-worth
  • Intrusive monitoring creating anxiety in workers
  • Manipulative interfaces exploiting cognitive vulnerabilities
  • Loss of human connection where AI replaces human contact in care settings
Financial harm

Economic loss, denial of opportunity, or unfair treatment in financial decisions caused by AI outputs.

  • Biased credit scoring denying loans to qualified applicants
  • Fraudulent activity enabled by AI bypassing security
  • Job displacement without adequate support mechanisms
  • Insurance pricing that systematically disadvantages groups
Reputational harm

Damage to individuals' standing, relationships, or professional prospects caused by incorrect or unfair AI outputs.

  • False positive fraud flags damaging customer relationships
  • Incorrect criminal risk scores affecting employment prospects
  • AI-generated content falsely attributed to individuals
  • Privacy breaches exposing sensitive personal information
Societal harm

Systemic effects on communities, democratic processes, or social structures that extend beyond individual cases.

  • Amplification of existing social inequalities at scale
  • Erosion of privacy norms through surveillance normalisation
  • Manipulation of political opinion through targeted content
  • Concentration of economic power in AI-enabled monopolies
Environmental harm

Resource consumption, emissions, and ecological impacts from AI systems — often overlooked in impact assessments.

  • Energy consumption of large model training and inference
  • Water usage in data centre cooling systems
  • E-waste from accelerated hardware obsolescence cycles
  • Optimisation decisions that externalise environmental costs
Severity assessment framework
Scale (how many affected?)
  • Low: individual or small group
  • Medium: significant subpopulation
  • High: large population or systemic

Scope (what proportion?)
  • Low: small fraction of the group affected
  • Medium: meaningful minority affected
  • High: majority or entire group affected

Likelihood (how probable?)
  • Low: unlikely given design & controls
  • Medium: possible under foreseeable conditions
  • High: probable or already occurring

Reversibility (can harm be undone?)
  • Low: fully reversible with intervention
  • Medium: partially reversible over time
  • High: permanent or very difficult to remedy
The misuse imperative

Always assess unintended applications

One of the most consistent findings in AI impact assessments is that harm frequently comes not from the intended use but from unintended applications, misuse, or over-reliance. A system designed to assist doctors may be used to replace clinical judgement entirely. A tool built for fraud detection may be weaponised against whistleblowers. Assessments that only consider the intended use case are systematically incomplete. ISO/IEC 42005 explicitly requires organisations to anticipate foreseeable misuse scenarios alongside intended use.

Stakeholder mapping is where many impact assessments fall short — focusing only on the direct users of a system while missing the people most likely to be harmed by it. The people operating an AI system are rarely the people most affected by its outputs.

The full stakeholder landscape
Operators

The people and organisations who deploy and run the AI system. Have the most visibility into its operation and the most direct ability to intervene. Often carry legal responsibility as "deployers" under the EU AI Act.

Direct users

People who interact with the system's interface directly — the hiring manager using the CV screening tool, the doctor using the diagnostic assistant. Often conflated with "subjects", but their relationship with the system is distinct.

Subjects

People about whom the AI system makes decisions or inferences. Often don't know the system exists. Frequently the group most at risk of harm — least empowered to challenge decisions made about them. Must be explicitly identified and prioritised.

Indirect affected parties

Communities, families, competitors, or social groups affected by the system's outputs without direct involvement. The families of people scored by a predictive policing system are indirect affected parties. Often completely absent from impact assessments.

Vulnerable groups

Groups with less power, resources, or legal protection to challenge AI decisions — including people with disabilities, low-income communities, ethnic minorities, children, elderly people. Must receive heightened scrutiny in harm analysis. ISO/IEC 42005 specifically flags these groups.

Regulatory & oversight bodies

Regulators, auditors, and oversight authorities who need to verify compliance. Their perspective — what evidence of due diligence they require — should inform how the assessment is structured and documented from the start.

Engagement — not just identification

Consult affected stakeholders — don't just list them

Identifying stakeholders on paper is the minimum. ISO/IEC 42005 and the HUDERIA methodology both emphasise that stakeholder engagement — actually consulting the people affected — improves the quality of risk analysis, builds transparency and trust, and often surfaces harms that internal teams have missed. For high-risk systems, genuine consultation with affected communities is increasingly expected by regulators and sophisticated procurement teams. "Checkbox consultation" — brief surveys that don't influence design — is becoming legally and reputationally inadequate.

Worked example — AI hiring tool

Who are the real stakeholders?

Operators: HR department, platform vendor

Direct users: Hiring managers, HR coordinators

Subjects: Job applicants — especially those from groups historically underrepresented in the industry

Indirect affected parties: Communities where hiring concentrates economic opportunity; current employees whose performance data trains the model; rejected candidates' families

Vulnerable groups: Applicants with disabilities; career changers (non-linear CVs); candidates from non-traditional educational backgrounds

Regulatory bodies: Employment regulators; data protection authorities; EU market surveillance authority (under the AI Act, recruitment AI is Annex III high-risk)

The salience framework

Not all stakeholders require the same depth of engagement

Prioritise stakeholder engagement based on three factors: power (can they influence the system?), legitimacy (do they have a recognised claim on the system's outcomes?), and urgency (are they at immediate risk of harm?). Groups with low power, high legitimacy, and high urgency — typically subjects from vulnerable groups — warrant the most intensive engagement and the most rigorous harm analysis. Groups with high power and low urgency may require less direct engagement but need to be kept informed.
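The salience logic reduces to a small set of rules. A minimal sketch, assuming a simple high/low rating for each of the three factors:

```python
from dataclasses import dataclass

@dataclass
class Salience:
    group: str
    power: bool       # can they influence the system?
    legitimacy: bool  # do they have a recognised claim on the system's outcomes?
    urgency: bool     # are they at immediate risk of harm?

def engagement_level(s: Salience) -> str:
    """Illustrative prioritisation following the power/legitimacy/urgency framing."""
    if s.legitimacy and s.urgency and not s.power:
        return "intensive engagement and rigorous harm analysis"
    if s.power and not s.urgency:
        return "keep informed"
    return "standard consultation"

applicants = Salience("Job applicants", power=False, legitimacy=True, urgency=True)
print(engagement_level(applicants))  # -> intensive engagement and rigorous harm analysis
```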

The vocabulary of impact assessment — the terms that signal you understand the methodology, not just the concept.

Assessment framework terms
AI Impact Assessment (AIIA)
The core deliverable
A structured evaluation of an AI system's potential effects — positive and negative — on individuals, groups, organisations, and society. Distinct from a risk assessment (which focuses on organisational exposure) in that it centres on human and societal consequences. Required by the EU AI Act for high-risk systems (as a Fundamental Rights Impact Assessment) and standardised by ISO/IEC 42005:2025.
ISO/IEC 42005:2025
The international standard
Published in 2025, the international standard for AI system impact assessments. Provides a standardised methodology for evaluating societal, individual, and organisational effects of AI systems. Complements ISO/IEC 42001 (AI management systems) and ISO/IEC 23894 (AI risk management). The closest thing to a globally recognised blueprint for how impact assessments should be conducted and documented.
Fundamental Rights Impact Assessment (FRIA)
EU AI Act requirement
The specific form of impact assessment required by the EU AI Act of certain deployers of high-risk AI systems, notably public bodies and private entities providing public services. Must assess potential impact on fundamental rights — privacy, non-discrimination, dignity, and access to justice. Must be documented and available to authorities on request. Required before deployment and updated when the system is substantially modified.
HUDERIA
Council of Europe methodology
Human Rights, Democracy and Rule of Law Impact Assessment. Adopted by the Council of Europe's Committee on Artificial Intelligence in November 2024. Provides a structured methodology for assessing AI risks to human rights, democracy, and the rule of law. Uses scale, scope, likelihood, and reversibility as the four severity variables. Designed to be flexible across diverse contexts and jurisdictions.
Harm Severity
Scale × scope × likelihood × reversibility
A composite measure of how serious a potential harm is. Assessed across four variables: scale (how many people affected), scope (what proportion of the affected group), likelihood (how probable under foreseeable conditions), and reversibility (can the harm be undone?). High severity = large scale, broad scope, high likelihood, and irreversible. Drives prioritisation of mitigations.
Subject
The person decisions are made about
The individual or group about whom an AI system makes decisions, inferences, or predictions. Distinct from the user (who operates the system). Often the stakeholder most at risk of harm and least empowered to challenge outcomes. A job applicant is the subject of a CV screening tool; the hiring manager is the user. Subjects must be explicitly identified and prioritised in every impact assessment.
Foreseeable Misuse
Beyond intended use cases
Applications, adaptations, or abuses of an AI system that the designer didn't intend but that are predictable given the system's capabilities and context. ISO/IEC 42005 requires organisations to assess foreseeable misuse scenarios alongside intended uses. A document summarisation tool could be misused for surveillance; a sentiment analysis tool could be weaponised for targeted manipulation.
Residual Harm
What remains after mitigation
Harm that remains after all practicable mitigations have been applied. Every system will have some residual harm — the question is whether it falls within the organisation's and society's acceptable threshold. Residual harm must be explicitly documented, disclosed to deployers (by providers), and communicated to affected individuals where appropriate. Ignoring residual harm is a governance failure.
Stakeholder Salience
Prioritisation framework
A framework for prioritising which stakeholders require the most intensive engagement, based on three factors: power (ability to influence the system), legitimacy (recognised claim on outcomes), and urgency (immediacy of potential harm). Groups with low power, high legitimacy, and high urgency — typically subjects from vulnerable groups — receive the highest priority in engagement and harm analysis.
Model Drift
Why post-deployment monitoring matters
The gradual deterioration in an AI system's performance over time as real-world data diverges from training data. Harms identified at assessment stage may materialise later due to model drift — making post-deployment monitoring essential. A system that was fair at launch may become discriminatory as the population it serves changes. Reassessment triggers should include evidence of drift.
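One common way to turn "evidence of drift" into a concrete reassessment trigger is a distribution-shift statistic such as the Population Stability Index (PSI), comparing live input data against the data the system was assessed on. This is a generic sketch rather than anything prescribed by ISO/IEC 42005, and the 0.2 alert threshold is a widely used rule of thumb, not a standard requirement.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline (expected) and current (actual) sample of one input feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero and log(0) in sparsely populated bins
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature distribution at assessment time
current = rng.normal(0.4, 1.2, 10_000)   # shifted live distribution
psi = population_stability_index(baseline, current)
if psi > 0.2:  # common rule-of-thumb alert level
    print(f"PSI = {psi:.2f}: significant drift, consider triggering reassessment")
```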

The conversations you'll have when clients ask about AI impact assessments — whether they're trying to understand what they need to do, or trying to understand why it's worth doing.

When they ask: "Is an AI impact assessment the same as a DPIA?"
Related but different. A DPIA (Data Protection Impact Assessment) is required under GDPR when processing personal data is likely to result in high risk. It focuses on data protection risks. An AI Impact Assessment is broader — it covers not just data protection but all potential harms to people from the AI system's outputs: physical, psychological, financial, reputational, societal, and environmental. Where an AI system processes personal data, you'll typically need both — and they're best designed to complement each other rather than be treated as separate exercises.
When they ask: "Which of our AI systems actually need a full impact assessment?"
A useful starting filter: any system that makes or significantly influences decisions about people — especially people who didn't choose to interact with it. Under the EU AI Act, Annex III high-risk systems trigger the Fundamental Rights Impact Assessment obligation for certain deployers, notably public bodies and private entities providing public services. Beyond legal requirements, the practical question is: if this system produces a wrong or unfair output, who gets hurt, and how badly? High-impact decisions in employment, credit, healthcare, education, or law enforcement almost always warrant a full assessment. Internal productivity tools used by employees who can override outputs are lower priority. The key word is "influence" — even advisory AI that humans technically override can produce harm if those overrides are rare in practice.
When they ask: "How long does an impact assessment take?"
It depends on complexity, but for a high-risk system expect 6–12 weeks for a thorough assessment — longer if genuine stakeholder consultation is required. The temptation is to compress this into a two-day workshop before a product launch. That's not an impact assessment — it's a box-ticking exercise that creates legal liability rather than reducing it. The investment pays back: organisations that conduct thorough impact assessments before deployment consistently experience fewer post-deployment incidents, faster regulatory approval, and stronger client trust. The cost of a proper assessment is a fraction of the cost of a post-deployment harm event.
When they ask: "We already have an ethics review process — do we still need this?"
An ethics review and an impact assessment serve different purposes. An ethics review typically asks "does this align with our values?" — it's principle-based and often conducted internally. An impact assessment asks "who specifically could be harmed, how severely, and what are we doing about it?" — it's evidence-based, structured, and documented for external accountability. Many organisations with mature ethics review processes still lack the structured, documented methodology that regulators expect and that ISO/IEC 42005 defines. The good news is that a strong ethics culture is excellent preparation for building a rigorous assessment process — the values alignment is already there; what's usually missing is the methodology and documentation rigour.
When they ask: "What do we actually produce at the end of this?"
A documented assessment report that contains: system description and scope, complete stakeholder map, identified harms with severity scores, mitigation measures with named owners and timelines, residual harm documentation, and a monitoring and reassessment plan. That document serves multiple audiences simultaneously: your technical teams (what to build), your governance and legal teams (compliance evidence), your board (risk oversight), your regulators (due diligence proof), and your clients (trust building). One investment, multiple uses. And critically — it's a living document, not a one-off report. It should be updated as the system evolves.

The framing that resonates

Impact assessments are insurance that pays dividends

Frame it not as a cost but as an investment that pays out in three ways: it reduces the probability of a harm event; if a harm event occurs, it demonstrates due diligence that limits legal and reputational exposure; and it builds the stakeholder trust that enables faster scaling of AI across the organisation. The clients most resistant to impact assessments are usually those most likely to need them.

Impact assessment is a methodology — understanding the steps and why each matters is more important than memorising definitions.

What are the six steps of an AI impact assessment — in order?
1. Define scope — what the system is, does, and doesn't do.

2. Map stakeholders — everyone affected directly and indirectly, especially subjects and vulnerable groups.

3. Identify harms — potential negative impacts across physical, psychological, financial, reputational, societal, and environmental dimensions.

4. Assess severity — score each harm on scale, scope, likelihood, and reversibility.

5. Mitigate — define controls (technical, procedural, governance, transparency) with named owners and timelines.

6. Monitor & update — ongoing evaluation with clear reassessment triggers.
What four variables make up the harm severity assessment — and what does "high severity" look like?
Scale — how many people affected?
Scope — what proportion of the affected group?
Likelihood — how probable given design and deployment context?
Reversibility — can the harm be undone?

High severity = large scale + broad scope + high likelihood + irreversible. Example: a credit scoring algorithm that systematically denies mortgages to a demographic group (scale: thousands; scope: most of that group; likelihood: high given how the model is designed; reversibility: low, because the financial and life-opportunity harm cannot easily be corrected after the fact).
What is the difference between an AI impact assessment and a DPIA?
A DPIA (Data Protection Impact Assessment) is required by GDPR when processing personal data poses high risk. It focuses on data protection risks specifically.

An AIIA is broader — it covers all potential harms from the AI system's outputs: physical, psychological, financial, reputational, societal, and environmental. Not just data protection.

Where an AI system processes personal data, you typically need both. They're designed to complement each other — not duplicate. A DPIA without an AIIA leaves the non-data-protection risks unassessed. An AIIA without a DPIA may miss specific GDPR obligations around data processing.
What is a "subject" in the context of an impact assessment — and why are they often the most important stakeholder?
A subject is someone about whom the AI system makes decisions or inferences — distinct from the user who operates the system. A job applicant is the subject of a CV screening tool; the hiring manager is the user.

Subjects are often the most important stakeholder because they are typically: most at risk of harm (decisions are made about them), least empowered to challenge those decisions, and often unaware the system exists. The Amazon hiring tool case illustrates this perfectly — female applicants were subjects who had no knowledge of or recourse against a system that was systematically downgrading their applications.
A client conducts a two-day workshop before launch and calls it their impact assessment. What's missing?
A pre-launch workshop is better than nothing. But ISO/IEC 42005 is explicit: assessments should be integrated throughout the AI lifecycle, not conducted once as a launch gate.
Three critical gaps. First, timing: an assessment done the week before launch documents decisions already made, with little ability to change outcomes. An assessment done at design stage can actually influence architecture. Second, scope: a two-day workshop rarely includes genuine stakeholder consultation with affected communities — it's usually an internal exercise. Third, continuity: there is no ongoing monitoring, no reassessment triggers, and no living document. If the system changes, scales, or the regulatory environment shifts — there is no process to update the assessment. This is documentation of a point in time, not an impact assessment programme.
You are assessing an AI system used to prioritise customer service queries. Who are the stakeholders — and which group is most likely to be overlooked?
A prioritisation system decides which customer queries get answered first, and potentially which ones get answered at all.
Operators: the customer service managers who configure and oversee the system.
Direct users: customer service agents who see the prioritised queue.
Subjects: customers whose queries are ranked — they have no knowledge of or input into this ranking.
Indirect affected parties: customers whose queries are deprioritised, potentially indefinitely.
Vulnerable groups: customers with accessibility needs, elderly customers, non-native language speakers — groups whose communication patterns may make them appear lower priority.

The most likely to be overlooked: vulnerable customers in the "indirect affected parties" and "subjects" categories — particularly those whose queries are consistently deprioritised because the model has learned to rank certain types of queries lower.

The standards, methodologies, and frameworks that define how AI impact assessments should be conducted and documented.
