Session Introduction

Facilitator Brief — Open Peer Discussion

This is a peer discussion, not a presentation.
Every person in the room has seen AI deliver or disappoint in a real institution.
The goal of the next 75 minutes is to find out why — and how.

🎯 Topics: 3 discussions · ~25 minutes each
💬 Format: peer discussion
🔒 Rules: Chatham House

Operational Efficiency & Compliance

Automating inefficient processes — and the regulatory risk that comes with it.

~25 minutes allocated (Part 1: Operational Efficiency · Part 2: Compliance & Legal)

What the data says right now

$340B
in annual value AI could unlock in banking alone, predominantly through productivity gains and cost reduction
McKinsey Global Institute, 2023
15–25%
cost savings reported by firms that scaled AI across operations, within 18 months of deployment in targeted functions
Deloitte AI in Financial Services Survey, 2024
72% vs 19%
say AI has changed how repetitive workflows run — but only 19% are measuring those gains rigorously
Gartner Financial Services AI Adoption Survey, 2025

Familiar Patterns:

01
Process automation on top of broken processes
RPA and AI are being applied to workflows that were never designed for efficiency. The result is faster waste.
02
Shadow AI undermining governance
Staff are using consumer AI tools (ChatGPT, Copilot) outside of sanctioned environments. IT and Risk are aware. Leadership frequently is not.
03
The "pilot cemetery"
Proof-of-concepts proliferate but do not scale. The reason is almost never the technology — it is unclear ownership, no change management, and no one accountable for production.
04
Measuring inputs, not outcomes
Institutions count AI deployments, not value delivered. The KPIs are vanity metrics. No one is measuring what changed for the customer or the P&L.
Up Next
Continued — Compliance & Legal
AI accelerates everything — including the things that can get you fined.

Compliance & Legal

AI accelerates everything — including the things that can get you fined.

What the data says right now

High Risk
EU AI Act classification for most AI in credit decisioning, insurance underwriting, and AML — requiring full documentation, human oversight, and conformity assessment before deployment
EU AI Act, Article 6 & Annex III, 2024
67%
of compliance officers say AI has increased their institution's regulatory risk exposure over the past 12 months — not reduced it
Kroll Global Fraud & Risk Report, 2024
$2.7B
in global AML fines in 2023 — regulators are now beginning to scrutinise the AI and model-based systems behind compliance decisions, not just the outcomes
ACAMS Global AML Fines Report, 2023

Familiar Patterns:

01
Models in production without documentation trails
AI systems deployed in the 2022–2023 pilot race are running in production with no formal model risk documentation, validation records, or named owner accountable for outcomes.
02
AI making decisions that cannot be explained to regulators
Credit scoring AI, AML screening models, and insurance pricing tools are making decisions that the law requires to be explainable — but the model architecture cannot produce a compliant explanation.
03
Compliance teams excluded from AI design
AI is built by technology and data science teams; compliance is brought in at the end to sign off. By that point the architecture is fixed and the governance gaps are structural.
04
Shadow AI creating ungoverned liability
Employees using consumer AI for legally sensitive tasks — drafting contracts, advising clients, interpreting regulations — create liability exposure that no one in risk is tracking.

The EU AI Act does not ask whether your AI works.

It asks whether you can prove it works safely, fairly, and accountably.

That is both a technology question and a governance question, and one many institutions have not yet answered.

EU AI Act, Title III — High-Risk AI Systems, 2024
Up Next
Topic 2 — Generating Alpha
You have the data, and maybe the model, but not everyone has the speed.

Generating Alpha

"Everyone has the data. Not everyone knows what to do with it."

~25 minutes allocated

What the data says right now

65%
of hedge funds now incorporate alternative data into investment processes — satellite imagery, web scraping, NLP on filings — up from 35% in 2020
Man Group Annual Report, 2024
$143B
projected size of the global alternative data market by 2030 — growing at 54% CAGR as AI lowers the cost of signal extraction
Grand View Research, 2024
15–30% vs reality
AI signals show 15–30% higher Sharpe ratios in backtests — but significantly lower out-of-sample performance when the strategy becomes crowded
Two Sigma Research, 2023

Familiar Patterns:

01
Backtest inflation and overfitting
Models optimised on historical data look brilliant in backtests and underperform live. The gap is almost never acknowledged publicly — it shows up quietly in redemptions.
02
Alternative data with legal and ethical exposure
Some alternative data sources — scraped web data, location data, inferred personal data — carry regulatory risk under GDPR, FCA, and SEC rules that quantitative teams may not have assessed.
03
Alpha decay from strategy crowding
When competing firms deploy the same model on the same data, the edge disappears and reversal risk increases. Most firms discover this too late, after capital has been deployed.
04
LLM hallucination in investment research
Analyst teams using LLMs to synthesise earnings calls and filings are encountering confident, plausible, and factually wrong summaries — with no reliable mechanism to detect them at scale.
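The backtest-inflation pattern above (01) can be demonstrated in a few lines: if you search many strategies and keep the one with the best historical Sharpe ratio, pure noise is enough to produce an impressive backtest that evaporates out of sample. A minimal sketch, assuming NumPy is available — all figures here are illustrative, not from any cited study:

```python
import numpy as np

rng = np.random.default_rng(42)

n_days = 252          # one year of daily returns
n_candidates = 500    # strategies tried during "research"

# Pure-noise daily returns: one in-sample year and one out-of-sample year
# per candidate strategy. There is no real signal anywhere.
in_sample = rng.normal(0, 0.01, (n_candidates, n_days))
out_sample = rng.normal(0, 0.01, (n_candidates, n_days))

def sharpe(returns):
    # Annualised Sharpe ratio (zero risk-free rate assumed).
    return returns.mean() / returns.std() * np.sqrt(252)

# Pick the candidate with the best in-sample Sharpe — i.e. overfit.
best = max(range(n_candidates), key=lambda i: sharpe(in_sample[i]))

print(f"In-sample Sharpe of selected strategy:  {sharpe(in_sample[best]):.2f}")
print(f"Out-of-sample Sharpe of same strategy:  {sharpe(out_sample[best]):.2f}")
```

The selected strategy's in-sample Sharpe is well above what the same strategy achieves on the unseen year, even though both datasets are statistically identical noise. Selection over enough candidates guarantees an inflated backtest.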

Most AI in investment management is not generating new alpha.

It is helping humans express existing convictions with less friction.

That is genuinely valuable.
But it is not the same thing as finding signal no one else has found.

Synthesised from Two Sigma Research, 2023 & BCG Asset Management AI Survey, 2024
Up Next
Topic 3 — Implementation Challenges
The pilot worked. Now what?

Implementation Challenges

The pilot worked. Now what?

~25 minutes allocated

What the data says right now

18 months
average time from AI pilot to production in financial services — nearly three times longer than comparable technology deployments in fintech and tech-native firms
McKinsey Global AI Survey, 2024
60%
of AI proofs-of-concept in financial services never reach production. The pilot cemetery is not a metaphor — it is a documented organisational pattern.
Gartner Financial Services AI Report, 2024
70%
of UK bank IT budgets are spent maintaining legacy infrastructure — leaving 30% for everything else, including AI deployment, modernisation, and security
Bank of England Technology Survey, 2023

Familiar Patterns:

01
Pilots built on curated data that does not reflect production reality
The pilot uses clean, prepared, hand-selected data. Production exposes the AI to the actual data estate — messy, incomplete, inconsistently labelled. The model fails or degrades rapidly.
02
AI teams operating independently of enterprise architecture
AI labs and innovation teams build solutions that cannot be integrated into core systems because they were never designed to connect. The handoff from lab to production is a cliff edge.
03
No clear production ownership
When a model in production degrades, drifts, or fails — who calls whom? Most institutions discover their answer to this question only when something goes wrong.
04
Cost unpredictability of LLM APIs at scale
LLM API costs that were negligible at pilot scale become CFO conversations at production scale. The business case approved for the pilot was not modelled at volume. The project stalls.
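Pattern 04 above is visible with back-of-envelope arithmetic. The sketch below is a hypothetical cost model — the per-token prices, token counts, and request volumes are placeholders, not any provider's actual rate card — but the shape of the result is the point: cost scales linearly with volume, so a negligible pilot bill becomes a material line item at production traffic.

```python
# Back-of-envelope LLM API cost model: pilot vs production volume.
# Every number below is an assumed placeholder — substitute your
# provider's actual per-token rates and your own traffic estimates.

PRICE_PER_1K_INPUT = 0.01   # USD per 1,000 input tokens (assumed)
PRICE_PER_1K_OUTPUT = 0.03  # USD per 1,000 output tokens (assumed)

def monthly_cost(requests_per_day, in_tokens, out_tokens, days=30):
    """Estimated monthly API spend for a given request profile."""
    per_request = (in_tokens / 1000) * PRICE_PER_1K_INPUT \
                + (out_tokens / 1000) * PRICE_PER_1K_OUTPUT
    return requests_per_day * per_request * days

# Pilot: a small team testing the tool. Production: firm-wide rollout.
pilot = monthly_cost(requests_per_day=200, in_tokens=2000, out_tokens=500)
production = monthly_cost(requests_per_day=150_000, in_tokens=2000, out_tokens=500)

print(f"Pilot:      ${pilot:,.0f}/month")
print(f"Production: ${production:,.0f}/month ({production / pilot:,.0f}x)")
```

Under these assumptions the pilot runs at roughly $210/month while the identical workload at production volume costs over $150,000/month — the same 750x multiplier as the traffic. A business case approved at pilot scale that was never re-modelled at volume is exactly how the project stalls.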

The firms furthest ahead on AI implementation are almost never the ones with the best AI strategy.

They are the ones with the cleanest data.

Your AI programme is only as fast
as your data estate will allow.

Synthesised from McKinsey Global AI Survey, 2024 & JPMorgan Technology & Data Report, 2024
Up Next
Section 2 — Live Demo
Let's see some production-grade AI applications in practice.