AI Governance Frameworks
Governance is no longer optional. It is the law.
The EU AI Act entered into force in August 2024. First obligations applied in February 2025. Full application from August 2026. Maximum penalties: €35M or 7% of global annual turnover — whichever is higher.
Yet KPMG research (2024) finds that 65% of boards lack sufficient visibility into how AI is being used within their organisation. That is not a technology gap. It is a governance gap — and it creates fiduciary exposure at board level.
- 7% of global turnover — maximum EU AI Act penalty (EU AI Act, 2024)
- 65% of boards lack AI visibility — a fiduciary risk (KPMG, 2024)
- 35% of organisations have a formalised AI governance framework (Deloitte, 2024)
- 72% of organisations are deploying AI faster than governance can manage (PwC, 2024)
The EU AI Act risk classification framework
Every AI system deployed by or within your organisation must be classified under the EU AI Act. Your obligations — and your exposure — depend entirely on which tier applies.
- Unacceptable risk: banned outright — effective February 2025. Examples: real-time biometric surveillance in public, social scoring systems, manipulation of vulnerable groups.
- High risk: mandatory conformity assessment, human oversight, technical documentation, EU database registration. Examples: AI in HR/recruitment, credit scoring, critical infrastructure, law enforcement, biometric identification.
- Limited risk: transparency obligations — users must be informed they are interacting with AI. Examples: chatbots, content generation systems, deepfakes.
- Minimal risk: voluntary codes of conduct recommended; no mandatory requirements. Examples: AI-enabled spam filters, basic recommendation systems.
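The four tiers above can be sketched as a lookup. This is purely illustrative: the tier names come from the Act, but the keyword map and the `classify` helper are our own assumptions, and real classification requires legal analysis of the Act's annexes, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    """EU AI Act risk tiers, ordered from strictest to lightest."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative map of example use cases to tiers, mirroring the list above.
TIER_BY_USE_CASE = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the tier for a known use case; default unknown systems to
    HIGH so they receive the strictest review, not the lightest."""
    return TIER_BY_USE_CASE.get(use_case, RiskTier.HIGH)
```

Defaulting to HIGH for unrecognised systems reflects a conservative governance posture: an unclassified system should trigger review rather than silently escape it.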
ISO 42001
The international standard for AI governance
ISO 42001, published in December 2023, is the world's first international standard for AI Management Systems. Structured analogously to ISO 27001 for information security, it provides enterprises with a risk-based, continuously improving architecture for governing AI across its full lifecycle.
ISO 42001 certification is emerging as a procurement condition for enterprise AI vendors and deployers — and is positioned as a pathway to demonstrating compliance with the EU AI Act's high-risk system requirements. Read the standard.
Risk-based AI governance
Structured risk assessment for every AI system, proportionate to its potential impact
Accountability structures
Clear ownership and accountability chains from AI systems to senior leadership
Impact assessments
Mandatory assessment of AI systems' impact on individuals, groups, and society
Human oversight
Documented mechanisms for human review and intervention in AI-driven decisions
Continuous improvement
PDCA (Plan-Do-Check-Act) cycle for ongoing governance refinement
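A minimal sketch of how the five elements above might surface in an AI system register. All names here (`AISystemRecord`, the field names, `record_improvement`) are illustrative assumptions, not terms prescribed by ISO 42001.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI system register, with a field per governance
    element: risk, accountability, impact, oversight, and improvement."""
    name: str
    risk_assessment: str           # risk-based governance
    accountable_owner: str         # accountability chain to leadership
    impact_assessment: str         # impact on individuals, groups, society
    human_oversight: str           # documented review/intervention mechanism
    pdca_actions: list[str] = field(default_factory=list)  # continuous improvement log

    def record_improvement(self, action: str) -> None:
        """Log an action arising from a Plan-Do-Check-Act review cycle."""
        self.pdca_actions.append(action)
```

Keeping the PDCA log on the record itself means each system carries its own improvement history, which a later audit can read without reconstructing it from meeting minutes.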
How we build your AI governance framework
Five stages from regulatory exposure mapping to a fully embedded, board-level governance architecture. All outputs are documented and retained by your organisation.
Stage 01
Regulatory Exposure Mapping
We map every AI system in your organisation against the EU AI Act risk classification tiers — Unacceptable, High, Limited, and Minimal risk. For each high-risk system, we identify the specific obligations that apply: conformity assessments, technical documentation, human oversight requirements, and registration in the EU AI database.
Stage 02
ISO 42001 Framework Design
We design your AI Management System to ISO 42001. As outlined above, the standard provides a risk-based, continuously improving governance architecture that satisfies both regulatory and board requirements.
Stage 03
Board-Level Accountability Structure
We establish clear accountability chains from AI system owners to the board. Only 9% of Fortune 500 boards have members with substantive AI expertise (Spencer Stuart, 2024). We design the reporting structures, terms of reference, and oversight mechanisms that close the governance gap — without requiring board-level technical knowledge.
Stage 04
Conformity Assessment & Documentation
For high-risk AI systems under the EU AI Act, we conduct the conformity assessment process and produce the technical documentation required before deployment. This includes data governance documentation, model cards, human oversight protocols, and the post-market monitoring plans required by law from August 2026.
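The documentation set named above can be tracked as a simple completeness check. The artefact names and the `missing_artefacts` helper are illustrative assumptions drawn from the paragraph above, not a canonical list from the Act.

```python
# Artefacts required before a high-risk system is deployed, per the
# stage description above (illustrative identifiers, not legal terms).
REQUIRED_ARTEFACTS = (
    "data_governance_documentation",
    "model_card",
    "human_oversight_protocol",
    "post_market_monitoring_plan",
)

def missing_artefacts(prepared: set[str]) -> list[str]:
    """Return the required artefacts not yet prepared, in a stable order,
    so a deployment gate can block release until the list is empty."""
    return [a for a in REQUIRED_ARTEFACTS if a not in prepared]
```

A release pipeline could call this as a gate: deployment proceeds only when `missing_artefacts(...)` returns an empty list.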
Stage 05
Governance Embedding & Training
We embed governance into your AI development lifecycle — not as a checkpoint at the end, but as a design constraint from the start. We train your product, engineering, and compliance teams on governance-by-design principles, ensuring your organisation retains the capability to govern AI at scale without ongoing external dependency.
Close the governance gap before the regulator does
Your competitors are already building AI governance infrastructure. Enforcement of the EU AI Act has already begun. The question is not whether to govern AI — it is whether your governance is designed to enable competitive advantage or merely avoid penalty.
See also: AI Strategy · Responsible AI · Our Code of Conduct