Banks are racing to embed AI into customer experience. But in financial services, speed alone will not define success – guardrails will.
As AI moves from back-end efficiency into the customer interface, it is beginning to shape behaviours, decisions and expectations in real time. This raises a more important question than adoption alone: what role should AI play within the business?
Early signals show there is no single answer. Monzo is shaping AI around customer intimacy and financial wellbeing, while Revolut is using it to scale faster and expand towards a super app. The technology may be similar, but the intent behind it is not.
That distinction matters. Because in financial services, AI only becomes meaningful when it is anchored in a clear purpose – one that reflects the brand and delivers value in a way customers recognise and trust.
Guardrails turn intent into experience
This is where knowing your aim becomes critical.
Too often, guardrails are discussed in purely technical terms, focused on compliance, risk mitigation, and regulatory alignment. All of these are essential in financial services, of course. But the more interesting role guardrails play is constructive.
Guardrails shape how AI behaves, how it communicates, and ultimately how it is experienced. They determine whether an interaction feels like guidance or deflection, whether a chatbot builds trust or erodes it.
Crucially, they also shape how organisations adopt AI internally. The most effective approaches do not start with the technology itself, but with a clear understanding of the problem being solved. From there, guardrails create the conditions for experimentation – giving teams the confidence to explore, iterate, and learn without compromising trust or accountability.
In this sense, guardrails are not barriers but tenets that ensure AI delivers on its intended purpose.
The legacy challenge and the challenger advantage
However, guardrails can only be as valuable as the structures they operate within.
Traditional banks are often operating under the weight of decades-old systems, fragmented data, and complex governance layers. Many spend disproportionately more than their competitors just to maintain their infrastructure, leaving less room to experiment, iterate, and reinvent.
Challenger banks, by contrast, are unburdened. With less technical debt and fewer entrenched processes, they are able to build AI-native experiences from the ground up.
But this does not mean incumbents are out of the race. In fact, many are actively evolving. Lloyds Banking Group, for example, has been investing heavily in AI-powered customer support, including chatbots designed to handle high-volume, everyday queries more efficiently. These initiatives show how legacy banks are beginning to layer AI onto existing systems in a pragmatic, use-case-led way.
At the same time, recent moves, such as Barclays signalling a return to physical branches, highlight something important: not every problem should be solved with AI. In some cases, the most effective guardrail is knowing when not to automate.
The future is unlikely to be AI-only. It will be AI-appropriate.
Reframing the chatbot problem
What “AI-appropriate” looks like in practice is often most visible in the simplest interactions. Few areas expose poor AI implementation more clearly than chatbots.
Consumers do not dislike AI. They dislike feeling redirected instead of helped, processed instead of understood, and managed instead of supported. This is not a technology issue but rather a positioning and design one.
For legacy banks in particular, this presents a clear opportunity. While they may be constrained by infrastructure, they are also custodians of long-established customer trust. Applied thoughtfully, AI can extend that trust rather than erode it.
But that depends on how it is framed.
When AI is framed as a cost-saving mechanism, it behaves like one. When it is designed as a value-adding service, such as a financial coach, an assistant, or an advisor, it is far more likely to be embraced.
At Metro Bank, this principle is closely tied to authenticity. As Briar Reidy, Head of Brand & Campaigns at Metro Bank, has emphasised, how AI is brought to market matters as much as what it does. If interactions feel driven by efficiency rather than customer need, trust quickly breaks down.
Even at the level of communication, the details matter. In high-stakes environments like finance, users favour clarity, concision, and an appropriate level of formality. Overly casual or overly human-like AI can undermine credibility.
Human, AI, human: the operating model that works
All of this points toward a broader truth. What sits behind these interactions is not just technology, but how AI is designed and governed.
A simple and powerful model is emerging. Human intent defines the goal. AI accelerates the execution. Human oversight ensures quality, accuracy, and accountability.
In regulated industries such as finance, this is not just best practice but essential. Beyond compliance, it reinforces the idea that AI works best not as a replacement for human judgement, but in service of it.
From efficiency to reinvestment
There is, of course, real efficiency to be gained from AI. Content production, internal workflows, and operational tasks can all be accelerated dramatically.
But the most forward-thinking organisations are not simply banking those savings. They are reinvesting them into better customer experiences, more meaningful interactions, and more ambitious, transformative products.
This is where AI moves from incremental improvement to genuine competitive advantage.
Ultimately, the success of AI in financial services will not be determined by how fast banks adopt it, or how much they automate.
It will be defined by how well they constrain it.
By the clarity of their intent. By the strength of their guardrails. And by their ability to design systems that make AI not just powerful, but appropriate, trustworthy, and genuinely useful.
David Stocks, Head of Strategy at WongDoody
