"The most useful thing you can do in the next 75 minutes is say what you actually think."
Tone. Speak as a peer — not a host. Sit down if you can. The moment you stand at a lectern, this becomes a presentation. It isn't.
Pace. Slow. Two minutes is longer than it feels. Let silence do work.
Good morning, and thank you for being here today. Someone said to me, "the world is being kidnapped by an F1 car" — he was referring to how AI is changing our workplaces and our lives.
We have over 20 senior people in this room — most of you work in significant financial institutions across buy-side, sell-side, and insurance; most of you use AI on a daily basis; and, most importantly, most of you are making decisions about the use of AI in your respective firms right now.
The aim of this event is to share your thoughts and reflections on how AI impacts our industry in certain aspects — we cannot discuss them all in one session. We'll also see some actual use cases.
I'm asking you to express your opinions and views freely about what's working and what isn't. Please provide actual examples to enrich the conversation.
I am here to facilitate. My job is to ask questions and to make sure everyone in this room participates and gets the chance to express their views — and their disagreements.
We will cover what leading institutions are doing — but we won't pretend that headline numbers tell the full story. They usually don't.
Let's start with some house rules: "Everything discussed today is under the Chatham House Rule — insights and themes may be taken out of this room; attribution may not."
Views expressed are those of the individuals stating them, not of the institutions they work for. Please keep your phones on silent. Coffee is available throughout. We will finish shortly after 10 AM. There are short demonstrations after the discussions — approximately 3–5 minutes each — before we close.
We will cover three areas: operational efficiency and compliance, generating alpha, and what it actually takes to get AI into production at scale. We have 25 minutes per topic, and I will start each one with context and a question. Let's dive in.
Let's look at some AI figures in financial services right now. The opportunity is real and large — $340 billion in addressable value by some estimates. The challenge is equally real. Most institutions are not automating fast enough, and their governance and compliance functions are not adapting quickly enough to be ready for this systemic shift. Most pilots are not reaching large-scale production. And the regulatory environment is about to get significantly more demanding. We are going to talk about all of it.
"Automation without redesign is just faster bureaucracy."
Every institution is automating.
But automation without redesign is just faster bureaucracy.
The real question is not whether AI is reducing headcount or cost —
it is whether anyone has fundamentally changed how work gets done.
Many banks are doing the former while reporting the latter. That is the honest truth worth surfacing.
If the group is speaking guardedly: "There's a version of this where AI is doing the equivalent of automating a fax machine. Are any of us guilty of that?"
Do not present these as "this is what good looks like." Use them as prompts to surface the room's experience. If someone works at one of these institutions, ask them to add texture or correct the record.
Reframe: "Let me ask it differently — where has AI disappointed you operationally? What did you build that didn't deliver?"
Negative examples almost always unlock more honest responses than positive ones in a room of peers.
The banks saving the most from AI are not the ones with the most AI.
They are the ones who stopped doing things.
The productivity gain is not in the tool.
It is in the decision to eliminate the process entirely.
Deploy when the conversation has circled around "efficiency" without anyone questioning what they're being efficient at.
Follow with: "Has anyone in this room actually killed a process — not automated it, killed it — because AI gave you the confidence to stop doing it?"
That question generates either a very good story or a very honest silence. Both are useful.
Synthesise 1–2 themes that emerged. Name them explicitly before moving on.
"AI accelerates everything — including the things that can get you fined."
The most powerful models are often the least explainable.
Regulators want both: accuracy and transparency.
The question is not whether your AI is right —
it is whether you can prove to a regulator that it is right, and fair, and auditable.
Most institutions are quietly choosing performance and hoping the regulator doesn't ask. The explainability trade-off is real — and most risk committees haven't been shown it explicitly.
Frame it directly: "How many of you are running AI models in production that you couldn't fully explain to your model risk committee if they asked today?"
Most people won't answer directly. The silence is the answer.
These are structural governance choices, not technology choices. The question is whether the institution has made AI governance a first-class function. Deutsche Bank's external publication is a strong prompt — most firms have internal commitments they'd never publish. Ask why.
Reframe: "Has your compliance team ever discovered an AI system in production that they didn't know existed? What happened?"
Almost every institution has a version of this story. Getting it out unlocks the rest of the conversation.
The EU AI Act does not ask whether your AI works.
It asks whether you can prove it works safely, fairly, and accountably.
That is not a technology question.
It is a governance question many institutions have not yet answered.
Use when the conversation has stayed in the abstract — people discussing regulation in principle but not in practice.
Follow with: "Who in this room has mapped their AI systems against the EU AI Act's high-risk classification list? And what did you find?"
The answers — and the silences — will drive the last five minutes of this topic.
Synthesise 1–2 themes that emerged. Name them explicitly before moving on.
"Everyone has the data. Not everyone knows what to do with it."
AI can find patterns no human analyst would spot.
But when enough firms deploy the same AI on the same data, the pattern disappears.
The edge is not in the algorithm.
It is in what you feed it that no one else has.
The crowding problem is the most honest conversation you can have in this space. Most firms using "alternative" data are using the same vendors. The data stopped being alternative two years ago.
Provocation: "If your alpha model uses satellite data and NLP on earnings calls — how many of your direct competitors are running the same signals on the same data right now?"
Two Sigma's signal retirement point is particularly useful — it surfaces the question of whether firms are measuring alpha decay at all. Most aren't systematically.
If anyone in the room is from an asset manager, the Man Group AHL example tends to prompt honest comparison of their own live vs. backtested performance gap.
Reframe: "Where has AI disappointed you in the investment process? What did you build that looked great in backtest and underperformed live?"
This almost always generates a real story. And real stories unlock the room.
Most AI in investment management is not generating new alpha.
It is helping humans express existing convictions with less friction.
That is genuinely valuable.
But it is not the same thing as finding signal no one else has found.
Deploy when the conversation has been about speed and efficiency rather than genuine signal discovery. It reframes what the room is actually debating.
Follow with: "If your AI is helping you execute faster on views your PMs already had — is that what you told your investors you were doing?"
That question either generates great candour or a very revealing non-answer.
Synthesise 1–2 themes that emerged. Name them explicitly before moving on.
"The pilot worked. Now what?"
Modern AI requires clean, accessible, well-governed data
and flexible, cloud-native infrastructure.
Most financial institutions have the exact opposite of both.
The honest version of this tension: most banks are building AI on a foundation that was never designed to support it. The AI strategy is more advanced than the infrastructure it's supposed to run on.
Provocation: "When you look at your AI ambitions and your actual infrastructure position — are those two things compatible? And if not, which one changes?"
The Starling comparison is the most useful provocation — not because traditional banks should become fintechs, but because the deployment speed difference is a quantifiable measure of infrastructure debt.
JPMorgan's Fusion platform is worth dwelling on: the data foundation came before the AI. Most institutions are trying to do it the other way round.
Reframe: "Tell me about an AI pilot that never made it to production. What killed it — and was it the technology, the data, the organisation, or the politics?"
That last list — technology / data / organisation / politics — tends to generate immediate, specific, and very honest answers.
The firms furthest ahead on AI implementation are almost never the ones with the best AI strategy.
They are the ones with the cleanest data.
Your AI programme is only as fast
as your data estate will allow.
Deploy when the conversation has focused on models, tools, and vendors — without anyone naming data quality as the real constraint.
Follow with: "On a scale of one to ten — how would you honestly rate the quality and accessibility of the data your AI currently trains on? And what is it costing you that it's not higher?"
No one says ten. The gap between their number and ten is your implementation constraint.
Synthesise the thread across all three topics. What was the recurring theme? Name it explicitly.