Banking is having an AI moment. Customer assistants are going live, banks are adding AI copilots to staff workflows, and senior teams want to show speed and intent. From the outside, it looks like a sector moving quickly.
Launches are easy. The real work starts when AI hits product rules, customer operations, approval chains and exception handling. That is where a bank finds out whether it has built real capability or just repackaged the same process.
AI often arrives before the bank has sorted out the work around it. Product rules sit in one system, pricing in another, documents come in from different channels, and approvals still move across too many teams. The tool may look smart, but the process still does not move cleanly.
Where the theatre starts
Recent weeks have shown how quickly AI has moved up the banking agenda. Revolut and Starling are pushing assistants into view. HSBC has elevated AI leadership. Santander is testing live end-to-end agentic payments, and Bank of America is building AI into adviser workflows. AI now has budget, senior backing and a visible place on the banking agenda.
None of those moves should be dismissed. They show intent, investment and a willingness to push AI beyond slide decks and lab environments. The theatre starts later, when a launch gets mistaken for transformation.
A new assistant can improve access. A copilot can save time. A leadership hire can sharpen direction. All of that is useful. None of it proves, on its own, that a bank has worked AI into the everyday running of the institution. In banking, the stretch between a good demonstration and a dependable operating model is still where most of the serious work sits.
That gap shows up most clearly when AI lands on top of old conditions it did not create. Product logic lives in one place, pricing rules in another, workflow controls somewhere else again. Change then gets slow, expensive and awkward to govern. AI arrives in the middle of that and gets asked to deliver a step-change anyway.
A lot of what gets described as AI transformation still looks further advanced from the outside than it does on the inside. The assistant goes live. The pilot lands well. The interface improves. Then the bank has to carry that progress through live decisions, document handling, policy controls and human oversight. That is usually the point where the applause fades and the operating reality takes over.
The difference between pilots and operating capability
The divide is not between banks that use AI and banks that do not. It is between banks that treat AI as a feature and banks that treat it as part of the operating model. Analyst research suggests that more than 40% of agentic AI projects will be cancelled by the end of 2027, citing rising costs, weak business value and inadequate risk controls. Read properly, that is not a warning against AI. It is a warning against treating pilots as if they were production capability.
The banks getting further are doing something more disciplined. They are putting AI inside governed workflows. They are working with cleaner data, clearer ownership and decision paths that can be traced and reviewed. They know where they want fixed rules, where they want adaptive judgment, and where a human needs to step in. That is a very different posture from dropping a model into a messy process and hoping the process improves around it.
Origination is a good place to see the difference. It brings document handling, eligibility, pricing, workflow and decisioning into the same process. Get that right, and AI can cut manual work, speed up approval and make outcomes more consistent. Get it wrong, and the bank ends up with a smarter front end and the same delays underneath.
The same applies in servicing, onboarding, collections and product change. These are the areas where manual work still carries a heavy cost, where inconsistency shows up quickly, and where weak design is hard to hide. Strong execution is just as visible there, and just as fast.
What banks should build next
Banks do not need another multi-year replacement programme. They need a cleaner route into production.
Part of that comes down to a simple choice: where does the bank want certainty, and where does it want judgment? Some decisions should remain tightly rules-based. Others benefit from adaptive intelligence. Strong institutions decide that deliberately. They build workflows around it. They decide where escalation sits. They know which actions can move quickly and which ones need human sign-off.
That is why governed data and unified workflows matter so much. AI scales when change stops being bespoke. It scales when product logic can be updated without reopening half the estate. It scales when workflow and decisioning are lined up properly. It scales when the institution can show what the system saw, what it decided, what changed and who approved it.
The UK backdrop is pushing in that direction. The Treasury Committee called in January for practical FCA guidance on AI in financial services by the end of 2026 and for AI-specific stress testing. The Bank of England’s February AI roundtables focused on the responsible adoption of AI and the constraints firms face in deployment. And in April, reports emerged that the UK is considering a standardised testing regime for the general-purpose AI models used by lenders.
The direction is clear. Banks will be expected to show how these systems behave in live settings, how decisions are governed and where accountability sits. That should focus minds in the right way. Financial services has always run on trust, and trust depends on discipline. Governed AI will follow the same rule.
The banks that pull ahead will not be the ones with the loudest AI story. They will be the ones that make AI dependable where banking actually happens: product setup, decisioning, document handling, servicing, compliance and controlled change.
Banking has learned how to launch AI. The next test is whether it can build the operating model that makes AI dependable.
Teo Blidarus, CEO & Co-Founder of FintechOS
