Legal in London, Banned in Brussels: The AI Regulatory Arbitrage Problem
By Khaled Shivji, Principal AI Consultant · Exec x AI (formerly S.AI.L) · EMEA & APAC · www.execxai.com
Exec x AI (formerly S.AI.L) Look Out Series #2 · 11 March 2026 · execxai.com/blog
TL;DR
If your AI governance strategy is “we’ll figure it out later,” then “later” just arrived. A single AI system can be legal in London, banned in Singapore and a lawsuit waiting to happen in Brussels, all at the same time. Contracting your way out of liability for using AI is becoming more difficult, and insurers are pulling coverage for AI-driven decisions. Create a strategy that maps your exposure. Audit your vendors. Govern your agents this quarter. Then forward this email to your head of risk/compliance, your general counsel or your CIO.
The problem
A single AI system deployed across your organisation may be compliant in Brussels, non-compliant in Dubai, in a legal grey zone in Washington and illegal to deploy in Singapore.
Why? In the race for AI supremacy, countries are regulating (or deregulating) in a shift towards technological nationalism. Call it AI sovereignty, or the golden age for American AI.
“Switching from Big Tech to Brit Tech, for example, simply puts the toxic cocktail of AI in the hands of UK oligarchs rather than US or Chinese tech giants” (Dan McQuillan, 2026)
Closed box or open source?
Black-box (closed) AI systems such as ChatGPT and Gemini, along with the algorithms inside self-driving vehicles and fraud-detection systems, raise serious accountability problems, and yet the EU AI Act treats them more favourably despite the fact that:
- They are opaque: their internal weights and biases are impossible to inspect
- They are weird: they produce outputs that defy human intuition. AlphaGo, built by DeepMind (now Google DeepMind), played Move 37, a move judged so creative and unique that no human would ever have made it
- They are unpredictable: type the same prompt into these systems and they will often produce different outputs (the sketch after this list shows the mechanics)
- Their decisions are hard to justify: they lack human-intelligible reasoning
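For readers who want the mechanics, the sketch below shows in plain Python why the same prompt can yield different answers: most chat systems sample each next token from a probability distribution rather than always taking the top choice. The token names and scores are invented for illustration; this is a toy, not any vendor’s implementation.

```python
import math
import random

def sample_next_token(scores: dict[str, float], temperature: float = 0.8) -> str:
    """Pick one token at random, weighted by a softmax over model scores."""
    scaled = {tok: s / temperature for tok, s in scores.items()}
    peak = max(scaled.values())
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}  # numerically stable softmax
    cutoff = random.uniform(0, sum(weights.values()))
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if cutoff <= cumulative:
            return tok
    return tok  # guard against floating-point rounding at the boundary

# Invented scores for three candidate next words; real models rank tens of thousands.
scores = {"approve": 2.1, "escalate": 1.9, "reject": 1.4}
print([sample_next_token(scores) for _ in range(5)])
# e.g. ['approve', 'escalate', 'approve', 'approve', 'reject'] -- same input, varying output
```

Lower the temperature and the output stabilises; raise it and variability grows. Either way, identical inputs do not guarantee identical outputs, which is exactly the property regulators and insurers are wrestling with.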
By contrast, open-source models such as Llama 4 and Mistral face enhanced obligations, including comprehensive evaluation, adversarial testing, incident tracking, cybersecurity protections and detailed technical documentation.
“A January 2026 analysis for law firms handling confidential client data concluded: ‘Open-source LLMs can be safe—when you deploy privately and lock down the basics’. But the compliance burden is substantial.”
Legal Soul, 2026 (Source: legalsoul.com)
The risks are real, and insurance coverage for AI usage is being curtailed
“AI is at present in product liability, in professional indemnity (PI). If you're a law firm and you use AI to do contract reviews, what if it doesn't pick something up?”
Carlo Ramadoro, head of cyber and technology insurance at Lockton (Source: Insurance Times)
“Insurers increasingly view AI models' outputs as too unpredictable and opaque to insure… it's too much of a black box.”
Dennis Bertram, head of cyber insurance at Mosaic (‘Insurers retreat from AI cover as risk of multibillion-dollar claims mounts’, Financial Times, 23 November 2025)
Regulatory arbitrage is an existential risk. Why?
- Your risks: an AI-powered process designed and validated in one jurisdiction may breach mandatory requirements in another, forcing you to maintain parallel workflows or withdraw the AI's capabilities from specific markets. Splitting cross-functional workflows can severely damage operational efficiency, nowhere more so than in post-merger programmes.
- Risks for your clients: if they depend on outputs from your AI systems, they inherit your compliance risk. General counsel offices are pushing back: we're seeing AI-specific indemnity clauses appear in procurement contracts, requiring vendors to warrant compliance with the EU AI Act regardless of where the model was trained. Some clients now require contractual commitments to data provenance, with penalties for non-disclosure. The direction of travel is clear: your clients will transfer AI compliance risk to you through contract. If your governance is not ready, your commercial terms will suffer.
The firms getting it right exhibit two qualities:
1) They build compliance into their AI architecture from inception, treating regional regulatory divergence as a design constraint rather than a deployment afterthought.
2) They recognise that proactive compliance is substantially cheaper and less disruptive than reactive enforcement.
Four risks you need to address now
Risk 1: Your AI supply chain crosses regulatory boundaries you may not have mapped
Your legal team needs to trace the provenance of every model's training data before you deploy across borders. If your vendor cannot provide that information, you are carrying unquantified legal risk on your balance sheet. The UK, EU and Australia have taken fundamentally different approaches to how copyright applies to AI training data, and the divergence is widening.
The UK High Court ruled in November 2025, in Getty v Stability AI, that model weights do not store or reproduce copyrighted works. The EU's text and data mining exception under the Copyright Directive requires rights-holder opt-out mechanisms. Australia rejected the Productivity Commission's proposed copyright carve-out entirely. A model trained lawfully in the UK may face infringement claims in Germany. These are not hypothetical conflicts.
Risk 2: Your vendors' voluntary commitments are not delivering
Audit your vendor commitments against independently measured performance
If your AI governance framework relies on vendor pledges to responsible AI principles, you should know those pledges are not being honoured. A study published at the AAAI/ACM Conference on AI, Ethics, and Society evaluated the sixteen companies that signed the White House voluntary AI commitments:
- Average compliance was 53%
- Model weight security commitments averaged 17%, with eleven of sixteen companies scoring zero
- Stanford's 2025 Foundation Model Transparency Index recorded average scores falling from 58/100 in 2024 to 40/100 in 2025
- Meta's score collapsed from 60 to 31
Risk 3: Agentic AI has no regulatory home
No jurisdiction has enacted binding agentic AI legislation
If you are deploying agents that make procurement decisions, manage customer interactions or execute transactions, you are operating in a governance vacuum. McKinsey reported that 80% of organisations have already encountered risky behaviour from AI agents. Georgetown's Center for Security and Emerging Technology mapped 950 AI governance documents and found multi-agent risks among the least covered subdomains. Singapore published the first state-backed agentic AI governance framework in January 2026. NIST issued a Request for Information on AI agent security the same month.
Risk 4: Copyright creates a direct financial liability for every AI deployment
If your AI systems were trained on data sourced without clear provenance, your legal exposure is growing with every ruling
The Bartz v Anthropic settlement of $1.5bn in August 2025 established a benchmark of roughly $3,000 per copyrighted work used in training, implying a class of around 500,000 works. The New York Times v OpenAI litigation saw a judge compel production of twenty million anonymised ChatGPT logs in January 2026. The EU Parliament endorsed compulsory licensing for AI training data. The UK consultation drew 11,500 responses, with 88% supporting mandatory licensing.
Three things to do this quarter
1. Map your AI regulatory exposure by jurisdiction. Identify which systems are high-risk under the EU AI Act, which require algorithm filings in China, and which fall under DIFC Regulation 10 in the UAE. If you cannot answer these questions, you have a gap (a minimal register sketch follows this list).
2. Audit your vendor commitments. Compare what your AI providers pledged against what they delivered. The Stanford Transparency Index and the AAAI/ACM voluntary commitments study provide benchmarks you can use today.
3. Build governance for agentic systems before your agents create liability. The Singapore framework provides a practical starting point. ISO 42001 provides the management system backbone.
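As a starting point for item 1, here is a minimal sketch of an exposure register in Python. Everything in it (the system name, jurisdiction labels and classifications) is an illustrative assumption rather than a legal conclusion; the point is the shape: every deployed system carries a per-jurisdiction status, and anything without one is a known gap.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    deployed_in: set[str]                                         # jurisdictions where outputs land
    classification: dict[str, str] = field(default_factory=dict)  # jurisdiction -> recorded status

def unmapped_exposure(register: list[AISystem]) -> dict[str, set[str]]:
    """Jurisdictions where a system operates but no regulatory classification is on file."""
    return {s.name: s.deployed_in - s.classification.keys() for s in register}

# Hypothetical entry: names and statuses are placeholders for your own legal review.
register = [
    AISystem(
        name="contract-review-llm",
        deployed_in={"EU", "UK", "UAE-DIFC", "SG"},
        classification={
            "EU": "high-risk under the EU AI Act (pending legal confirmation)",
            "UK": "sector guidance only",
        },
    ),
]
print(unmapped_exposure(register))  # e.g. {'contract-review-llm': {'SG', 'UAE-DIFC'}}
```

A register this simple will not satisfy a regulator, but it makes the gap visible, and a visible gap can be assigned an owner and a deadline.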
AI liability clauses: what your contracts actually say about your risk
Twelve risk areas. One table. Your role determines your exposure.
You signed a services agreement. Your supplier uses AI to deliver some of the work. Your contract probably says nothing specific about AI liability. That gap is your risk.
The table below identifies who carries the risk (supplier, customer or both), what the clause covers, and whether you need to monitor it.
The legal analysis is based on English law. If you operate cross-border, the EU AI Act adds further obligations with extraterritorial reach.
Core reference
AI liability clause map
We analysed clauses found in template contracts that set out how the risk of using AI systems ought to be apportioned between supplier and customer. If your existing services agreement contains no AI-specific terms, every one of these twelve risk areas defaults to generic contract language that was drafted before AI existed. Generic language creates ambiguity. Ambiguity benefits the party that did not cause the loss. The usual caveats apply: prepared for general information purposes with reference to English law, this is not legal advice. Seek qualified legal advice for your specific circumstances. Exec x AI (formerly S.AI.L) can recommend law firms you can approach for counsel and guidance; please contact us at humans@execxai.com.
| Risk area | Who is liable? | Severity | Contractual obligations | Action points |
|---|---|---|---|---|
| Liability caps and exclusions | Supplier / Customer | High | Standard services agreements cap the supplier’s liability at a percentage or multiple of fees. Liability for data loss, lost profits and lost revenue is typically excluded entirely. Data protection claims are subject to a separate monetary cap. Without AI-specific carve-outs, these generic caps apply to AI failures too. | Check whether AI-caused losses fall within your existing cap or sit in an excluded category. A supplier’s AI error that wipes your dataset may be excluded under “data loss”. Negotiate AI-specific sub-caps or carve-outs before signing. |
| Standard of care | Supplier | High | The default contractual obligation is “reasonable skill and care”. For AI services, this may be insufficient. Customer playbook should stipulate measurable performance targets, accuracy thresholds and specific criteria for judging AI training quality. AI proficiency raises the bar: if AI tools surpass human accuracy, failing to use them could itself breach the standard of care. | Generic skill-and-care wording gives you no objective benchmark for AI output quality. Specify accuracy rates, error tolerances, response times and testing criteria in your statement of work. If you are the supplier, define what “reasonable” means for AI-assisted deliverables before the client defines it for you in court. |
| IP ownership of AI outputs | Supplier / Customer | High | Under s.9(3) CDPA 1988, the author of a computer-generated work is the person who made the arrangements necessary for its creation. Courts have not settled who that is in an AI context. Standard agreements assign deliverable IP to one party, but rarely address AI-generated works specifically. Lack of express terms increases dispute risk. | Do not rely on default IP law. Specify in the statement of work who owns deliverables, results, derivative works and usage data produced by the AI system. Cover what happens on termination. If you are the customer, require assignment. If you are the supplier, retain ownership and grant a licence. |
| IP infringement by AI output | Supplier | High | AI systems can produce output that infringes third-party copyright, trade marks or patents. Pro-customer services agreements include supplier indemnities for IPR infringement of deliverables. Customer playbooks should feature a detailed compliance checklist flagging known IP infringement risks. Some AI software providers offer indemnities for IP infringement, though usually at an additional fee. | If you are the customer, insist on a supplier indemnity for third-party IP claims arising from AI-generated deliverables. If you are the supplier, understand what your AI platform’s own indemnity covers (and what it excludes) before you pass a broader indemnity upstream. |
| Confidential know-how and trade secrets | Customer | High | Provider terms may include broad rights to access, retain and use customer data across other clients’ projects. AI deployment raises the risk of disclosing trade secrets and creating derivative works from proprietary methodologies. Cloud-based AI services amplify this risk compared with on-premise deployment. | Restrict the supplier’s right to use your confidential data to what is strictly necessary for your project. Consider on-premise deployment if client confidentiality is central to your business (law firms, financial advisers). If using cloud AI, require detailed information on security measures and data isolation. |
| Data protection and GDPR compliance | Supplier / Customer | High | Where personal data is processed by AI, UK GDPR applies. The controller (usually the customer) has direct compliance obligations and may face fines. The processor (usually the supplier) is also directly liable. Contracts must include prescribed data processing terms. AI amplifies risk through large-scale data aggregation, bias in outputs, and the potential for re-identification of anonymised data. DPIAs are required for high-risk processing. | Confirm your role (controller or processor) before you sign. Include data processing schedules with prescribed terms. Conduct a DPIA for any AI project involving personal data. Review whether the AI system transfers data outside the UK and ensure adequate safeguards are in place. |
| Negligence and tort liability | Supplier / Customer | Medium | Common law duties of care apply to AI providers and users. The categories of negligence are open-ended, and AI will generate new duty-of-care situations. If AI surpasses human performance in a field, the standard of care shifts: failing to use AI could itself constitute negligence. For autonomous or mobile AI, courts are likely to extend strict liability under Rylands v Fletcher. Product liability under the Consumer Protection Act 1987 extends to defective AI-enabled products. | Your contractual liability cap will not protect you against third-party tort claims. If your AI system causes loss to someone outside the contract, they sue in negligence. Assess whether your AI use case creates foreseeable risks of harm. If you deploy AI in safety-critical contexts, assume a court will hold you to a high standard of care. |
| Regulatory compliance (EU AI Act) | Supplier / Customer | High | The EU AI Act has extraterritorial effect and covers any AI output available within the EU. It prohibits certain AI practices and imposes compliance duties on high-risk AI systems. Different obligations apply to providers, deployers, distributors and importers. Article 4 requires AI literacy among staff. Prohibited practices and general-purpose AI provisions are already in force. | If any output from your AI system reaches the EU, you are in scope. Identify your role under the Act (provider, deployer, etc.) and map your obligations. Include contractual rights to renegotiate or terminate if incoming regulation changes the deal economics. Budget for the lead time and cost of bringing your AI system into compliance. |
| Insurance coverage | Supplier / Customer | Medium | Professional indemnity insurance may not cover AI-related claims. Standard PI and D&O policies are beginning to impose AI-specific exclusions; other insurers are still assessing the risk. Ethically, responsibility for decisions made with AI rests with a human professional, not the AI, and contractual clauses should reflect the same principle. The key risk for both parties is that if existing insurance does not respond to an AI claim, the supplier risks insolvency and the client risks inadequate recovery of damages. | Check with your insurer whether AI-related claims are covered under your existing PI policy. Watch for exclusions or limitations added at renewal. If you are the customer, require the supplier to maintain insurance that covers AI-related liabilities and verify this annually. If insurance costs for AI are disproportionately high, treat that as a signal of elevated risk. |
| Output verification and governance | Supplier / Customer | Medium | Contracts should specify which party is responsible for verifying AI outputs. Playbooks should dictate who is contractually responsible for logging by design, audit rights (first and third party), red-teaming exercises and bias bounties. If the AI provider outsources to a third-party AIaaS provider, governance obligations may become diluted across the supply chain. | Agree who verifies outputs and document that responsibility. Require logging of AI decisions and retain audit rights (a minimal logging sketch follows this table). If the supplier sub-contracts to an AIaaS provider, insist on flow-down governance obligations. Specify how erroneous outputs will be identified, rectified and their impact mitigated. |
| Ethical and reputational risk | Supplier / Customer | Medium | AI outputs can be unexpected, biased or discriminatory, and training data may embed historical bias. The organisation using AI should establish its own office of responsible AI, formalising a governance structure around an AI ethics committee, defined AI product liabilities and staff training on identifying ethical risks. Anti-discrimination laws and data protection regimes give individuals rights to challenge automated decisions. | Establish an internal AI ethics framework before your first project. Audit training data for bias. Ensure individuals can opt out of fully automated decisions where required. AI washing (exaggerating AI capabilities) is a separate reputational risk; represent your AI use accurately. |
| Termination, exit and data return | Customer | High | Ownership and licence terms must address what happens on termination; failing to do this risks vendor lock-in. Transition assistance is expensive to bake into a contract, but the cost of omitting it is far higher once lock-in takes hold. Without these terms, AI model lock-in creates switching costs and ongoing dependency. | Negotiate exit provisions at the start, not the end. Specify data return formats, transition assistance periods and destruction obligations. Address whether the supplier retains any rights to use your data post-termination for model improvement or other clients. |
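“Logging by design” in the output-verification row is easier to agree when both sides can see how little it takes. The sketch below is one minimal way to do it, assuming nothing about your stack: a Python wrapper that stamps every AI-assisted decision with an id and emits a JSON audit line. The function names and the toy classifier are hypothetical stand-ins, not any particular vendor’s interface.

```python
import functools
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.audit")

def logged_ai_decision(func):
    """Wrap an AI-assisted decision so its inputs and output are recorded for audit."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        record = {
            "decision_id": str(uuid.uuid4()),
            "function": func.__name__,
            "inputs": repr((args, kwargs)),
            "timestamp": time.time(),
        }
        result = func(*args, **kwargs)
        record["output"] = repr(result)
        audit_log.info(json.dumps(record))  # in production, ship to append-only storage
        return record["decision_id"], result  # the id ties the output to its audit entry
    return wrapper

@logged_ai_decision
def classify_invoice(text: str) -> str:
    # Stand-in for the call to your AI service.
    return "approve" if "purchase order" in text.lower() else "escalate"

decision_id, outcome = classify_invoice("Purchase order #4411 attached")
```

Contractually, the useful artefact is the decision id: when an output is later disputed, it gives supplier and customer a shared reference to the exact inputs and result.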
Continuous monitoring
Risks that require ongoing attention
Regulatory horizon
The EU AI Act’s full provisions take effect in mid-2026. The UK has no AI-specific legislation yet, but regulators are publishing sector guidance. Each regulatory change may alter your contractual obligations or commercial terms.
Review: quarterly
Insurance exclusions
PI insurers are assessing AI risk. Expect new exclusions, limitations or questionnaires at renewal. A professional consultant’s AI liability is personal; the AI system itself cannot be held liable or insured.
Review: at each renewal
IP case law
Courts are still defining who owns computer-generated works. The Supreme Court’s Emotional Perception AI ruling (2026) shifted patent law. Copyright ownership of AI output remains unsettled. New decisions may change your contractual position.
Review: quarterly
Data protection enforcement
The ICO and EU data protection authorities are increasing scrutiny of AI systems that process personal data. Enforcement actions create precedent that affects your DPIA obligations and processor contract terms.
Review: quarterly
Supplier AI practices
Your supplier may change its AI platform, sub-contract to a new AIaaS provider, or alter how it uses your data. Governance rights are only useful if you exercise them. Audit regularly.
Review: semi-annually
Standard of care benchmarks
As AI accuracy improves, the threshold for professional negligence shifts. What was acceptable without AI may become negligent when AI is available. Monitor your sector’s evolving expectations.
Review: annually